00:00:00.001 Started by upstream project "autotest-per-patch" build number 132789
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.052 The recommended git tool is: git
00:00:00.052 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.069 Fetching changes from the remote Git repository
00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.094 Using shallow fetch with depth 1
00:00:00.094 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.094 > git --version # timeout=10
00:00:00.122 > git --version # 'git version 2.39.2'
00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.160 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.160 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.149 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.159 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.171 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.171 > git config core.sparsecheckout # timeout=10
00:00:03.181 > git read-tree -mu HEAD # timeout=10
00:00:03.198 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.218 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.218 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.406 [Pipeline] Start of Pipeline
00:00:03.420 [Pipeline] library
00:00:03.421 Loading library shm_lib@master
00:00:03.422 Library shm_lib@master is cached. Copying from home.
00:00:03.440 [Pipeline] node
00:00:03.459 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.461 [Pipeline] {
00:00:03.472 [Pipeline] catchError
00:00:03.473 [Pipeline] {
00:00:03.487 [Pipeline] wrap
00:00:03.495 [Pipeline] {
00:00:03.500 [Pipeline] stage
00:00:03.501 [Pipeline] { (Prologue)
00:00:03.747 [Pipeline] sh
00:00:04.032 + logger -p user.info -t JENKINS-CI
00:00:04.048 [Pipeline] echo
00:00:04.050 Node: CYP9
00:00:04.056 [Pipeline] sh
00:00:04.356 [Pipeline] setCustomBuildProperty
00:00:04.363 [Pipeline] echo
00:00:04.364 Cleanup processes
00:00:04.368 [Pipeline] sh
00:00:04.653 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.653 3910860 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.667 [Pipeline] sh
00:00:04.956 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.956 ++ grep -v 'sudo pgrep'
00:00:04.956 ++ awk '{print $1}'
00:00:04.956 + sudo kill -9
00:00:04.956 + true
00:00:04.972 [Pipeline] cleanWs
00:00:04.982 [WS-CLEANUP] Deleting project workspace...
00:00:04.982 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.990 [WS-CLEANUP] done
00:00:04.993 [Pipeline] setCustomBuildProperty
00:00:05.003 [Pipeline] sh
00:00:05.291 + sudo git config --global --replace-all safe.directory '*'
00:00:05.385 [Pipeline] httpRequest
00:00:06.009 [Pipeline] echo
00:00:06.011 Sorcerer 10.211.164.112 is alive
00:00:06.019 [Pipeline] retry
00:00:06.021 [Pipeline] {
00:00:06.033 [Pipeline] httpRequest
00:00:06.037 HttpMethod: GET
00:00:06.037 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.038 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.042 Response Code: HTTP/1.1 200 OK
00:00:06.042 Success: Status code 200 is in the accepted range: 200,404
00:00:06.043 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.579 [Pipeline] }
00:00:06.596 [Pipeline] // retry
00:00:06.604 [Pipeline] sh
00:00:06.895 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.912 [Pipeline] httpRequest
00:00:07.239 [Pipeline] echo
00:00:07.241 Sorcerer 10.211.164.112 is alive
00:00:07.250 [Pipeline] retry
00:00:07.252 [Pipeline] {
00:00:07.265 [Pipeline] httpRequest
00:00:07.270 HttpMethod: GET
00:00:07.271 URL: http://10.211.164.112/packages/spdk_427915fc69f1c7c870d9bbd7edb265c30026340a.tar.gz
00:00:07.272 Sending request to url: http://10.211.164.112/packages/spdk_427915fc69f1c7c870d9bbd7edb265c30026340a.tar.gz
00:00:07.286 Response Code: HTTP/1.1 200 OK
00:00:07.286 Success: Status code 200 is in the accepted range: 200,404
00:00:07.287 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_427915fc69f1c7c870d9bbd7edb265c30026340a.tar.gz
00:01:01.660 [Pipeline] }
00:01:01.678 [Pipeline] // retry
00:01:01.686 [Pipeline] sh
00:01:01.978 + tar --no-same-owner -xf spdk_427915fc69f1c7c870d9bbd7edb265c30026340a.tar.gz
00:01:05.343 [Pipeline] sh
00:01:05.636 + git -C spdk log --oneline -n5
00:01:05.636 427915fc6 test/nvmf: Try to support iWARP under relevant tests
00:01:05.636 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:05.636 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:05.636 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:05.636 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:05.649 [Pipeline] }
00:01:05.665 [Pipeline] // stage
00:01:05.675 [Pipeline] stage
00:01:05.678 [Pipeline] { (Prepare)
00:01:05.698 [Pipeline] writeFile
00:01:05.715 [Pipeline] sh
00:01:06.006 + logger -p user.info -t JENKINS-CI
00:01:06.022 [Pipeline] sh
00:01:06.313 + logger -p user.info -t JENKINS-CI
00:01:06.327 [Pipeline] sh
00:01:06.618 + cat autorun-spdk.conf
00:01:06.618 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.618 SPDK_TEST_NVMF=1
00:01:06.618 SPDK_TEST_NVME_CLI=1
00:01:06.618 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.618 SPDK_TEST_NVMF_NICS=e810
00:01:06.618 SPDK_TEST_VFIOUSER=1
00:01:06.618 SPDK_RUN_UBSAN=1
00:01:06.618 NET_TYPE=phy
00:01:06.627 RUN_NIGHTLY=0
00:01:06.632 [Pipeline] readFile
00:01:06.660 [Pipeline] withEnv
00:01:06.662 [Pipeline] {
00:01:06.676 [Pipeline] sh
00:01:06.968 + set -ex
00:01:06.969 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:06.969 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.969 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.969 ++ SPDK_TEST_NVMF=1
00:01:06.969 ++ SPDK_TEST_NVME_CLI=1
00:01:06.969 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.969 ++ SPDK_TEST_NVMF_NICS=e810
00:01:06.969 ++ SPDK_TEST_VFIOUSER=1
00:01:06.969 ++ SPDK_RUN_UBSAN=1
00:01:06.969 ++ NET_TYPE=phy
00:01:06.969 ++ RUN_NIGHTLY=0
00:01:06.969 + case $SPDK_TEST_NVMF_NICS in
00:01:06.969 + DRIVERS=ice
00:01:06.969 + [[ tcp == \r\d\m\a ]]
00:01:06.969 + [[ -n ice ]]
00:01:06.969 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:06.969 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:06.969 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:06.969 rmmod: ERROR: Module irdma is not currently loaded
00:01:06.969 rmmod: ERROR: Module i40iw is not currently loaded
00:01:06.969 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:06.969 + true
00:01:06.969 + for D in $DRIVERS
00:01:06.969 + sudo modprobe ice
00:01:06.969 + exit 0
00:01:06.980 [Pipeline] }
00:01:06.994 [Pipeline] // withEnv
00:01:06.999 [Pipeline] }
00:01:07.012 [Pipeline] // stage
00:01:07.022 [Pipeline] catchError
00:01:07.024 [Pipeline] {
00:01:07.039 [Pipeline] timeout
00:01:07.039 Timeout set to expire in 1 hr 0 min
00:01:07.041 [Pipeline] {
00:01:07.057 [Pipeline] stage
00:01:07.060 [Pipeline] { (Tests)
00:01:07.076 [Pipeline] sh
00:01:07.368 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.368 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.368 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.368 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:07.368 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.368 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.368 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:07.368 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.368 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.368 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.368 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:07.368 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.368 + source /etc/os-release
00:01:07.368 ++ NAME='Fedora Linux'
00:01:07.368 ++ VERSION='39 (Cloud Edition)'
00:01:07.368 ++ ID=fedora
00:01:07.368 ++ VERSION_ID=39
00:01:07.368 ++ VERSION_CODENAME=
00:01:07.368 ++ PLATFORM_ID=platform:f39
00:01:07.368 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:07.368 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:07.368 ++ LOGO=fedora-logo-icon
00:01:07.368 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:07.368 ++ HOME_URL=https://fedoraproject.org/
00:01:07.368 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:07.368 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:07.368 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:07.368 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:07.368 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:07.368 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:07.368 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:07.368 ++ SUPPORT_END=2024-11-12
00:01:07.368 ++ VARIANT='Cloud Edition'
00:01:07.368 ++ VARIANT_ID=cloud
00:01:07.368 + uname -a
00:01:07.368 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:07.368 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:10.673 Hugepages
00:01:10.673 node hugesize free / total
00:01:10.673 node0 1048576kB 0 / 0
00:01:10.673 node0 2048kB 0 / 0
00:01:10.673 node1 1048576kB 0 / 0
00:01:10.673 node1 2048kB 0 / 0
00:01:10.673
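The setup.sh status output around this point reports hugepage availability per NUMA node (above) and the PCI devices visible to SPDK (below). As a minimal sketch of where those hugepage counts come from, assuming the standard Linux sysfs layout (this loop is illustrative and not part of the SPDK scripts):

for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    size=${hp##*hugepages-}    # e.g. 2048kB or 1048576kB
    # free_hugepages / nr_hugepages mirror the "free / total" columns above
    echo "${node##*/} $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
  done
done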
00:01:10.673 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:10.673 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:10.673 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:10.673 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:10.673 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:10.673 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:10.673 + rm -f /tmp/spdk-ld-path
00:01:10.673 + source autorun-spdk.conf
00:01:10.673 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.673 ++ SPDK_TEST_NVMF=1
00:01:10.673 ++ SPDK_TEST_NVME_CLI=1
00:01:10.673 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.673 ++ SPDK_TEST_NVMF_NICS=e810
00:01:10.673 ++ SPDK_TEST_VFIOUSER=1
00:01:10.673 ++ SPDK_RUN_UBSAN=1
00:01:10.673 ++ NET_TYPE=phy
00:01:10.673 ++ RUN_NIGHTLY=0
00:01:10.673 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:10.673 + [[ -n '' ]]
00:01:10.673 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.673 + for M in /var/spdk/build-*-manifest.txt
00:01:10.673 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:10.673 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:10.673 + for M in /var/spdk/build-*-manifest.txt
00:01:10.673 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:10.673 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:10.673 + for M in /var/spdk/build-*-manifest.txt
00:01:10.673 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:10.673 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:10.673 ++ uname
00:01:10.673 + [[ Linux == \L\i\n\u\x ]]
00:01:10.673 + sudo dmesg -T
00:01:10.673 + sudo dmesg --clear
00:01:10.673 + dmesg_pid=3912404
00:01:10.673 + [[ Fedora Linux == FreeBSD ]]
00:01:10.673 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:10.673 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:10.673 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:10.673 + [[ -x /usr/src/fio-static/fio ]]
00:01:10.673 + export FIO_BIN=/usr/src/fio-static/fio
00:01:10.673 + FIO_BIN=/usr/src/fio-static/fio
00:01:10.673 + sudo dmesg -Tw
00:01:10.673 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:10.673 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:10.673 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:10.673 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:10.673 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:10.673 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:10.673 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:10.673 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:10.673 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:10.673 11:36:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:10.673 11:36:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:10.673 11:36:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:10.673 11:36:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:10.673 11:36:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:10.935 11:36:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:10.935 11:36:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:10.935 11:36:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:10.935 11:36:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:10.935 11:36:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:10.935 11:36:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:10.935 11:36:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:10.935 11:36:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:10.935 11:36:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:10.935 11:36:18 -- paths/export.sh@5 -- $ export PATH
00:01:10.935 11:36:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:10.936 11:36:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:10.936 11:36:18 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:10.936 11:36:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733740578.XXXXXX
00:01:10.936 11:36:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733740578.IECMRY
00:01:10.936 11:36:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:10.936 11:36:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:10.936 11:36:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:10.936 11:36:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:10.936 11:36:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:10.936 11:36:18 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:10.936 11:36:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:10.936 11:36:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:10.936 11:36:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:10.936 11:36:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:10.936 11:36:18 -- pm/common@17 -- $ local monitor
00:01:10.936 11:36:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:10.936 11:36:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:10.936 11:36:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:10.936 11:36:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:10.936 11:36:18 -- pm/common@21 -- $ date +%s
00:01:10.936 11:36:18 -- pm/common@25 -- $ sleep 1
00:01:10.936 11:36:18 -- pm/common@21 -- $ date +%s
00:01:10.936 11:36:18 -- pm/common@21 -- $ date +%s
00:01:10.936 11:36:18 -- pm/common@21 -- $ date +%s
00:01:10.936 11:36:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740578
00:01:10.936 11:36:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740578
00:01:10.936 11:36:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740578
00:01:10.936 11:36:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733740578
00:01:10.936 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740578_collect-cpu-load.pm.log
00:01:10.936 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740578_collect-vmstat.pm.log
00:01:10.936 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740578_collect-cpu-temp.pm.log
00:01:10.936 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733740578_collect-bmc-pm.bmc.pm.log
00:01:11.880 11:36:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:11.880 11:36:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:11.880 11:36:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:11.880 11:36:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:11.880 11:36:19 -- spdk/autobuild.sh@16 -- $ date -u
00:01:11.880 Mon Dec 9 10:36:19 AM UTC 2024
00:01:11.880 11:36:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:11.880 v25.01-pre-312-g427915fc6
00:01:11.880 11:36:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:11.880 11:36:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:11.880 11:36:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:11.880 11:36:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:11.880 11:36:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:11.880 11:36:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:11.880 ************************************
00:01:11.880 START TEST ubsan
00:01:11.880 ************************************
00:01:11.880 11:36:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:11.880 using ubsan
00:01:11.880
00:01:11.880 real 0m0.001s
00:01:11.880 user 0m0.000s
00:01:11.880 sys 0m0.000s
00:01:11.880 11:36:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:11.880 11:36:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:11.880 ************************************
00:01:11.880 END TEST ubsan
00:01:11.880 ************************************
00:01:11.880 11:36:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:11.880 11:36:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:11.880 11:36:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:11.880 11:36:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:12.141 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:12.141 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:12.402 Using 'verbs' RDMA provider
00:01:28.257 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:40.503 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:41.076 Creating mk/config.mk...done.
00:01:41.076 Creating mk/cc.flags.mk...done.
00:01:41.076 Type 'make' to build.
00:01:41.076 11:36:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:41.076 11:36:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:41.076 11:36:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:41.076 11:36:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.076 ************************************
00:01:41.076 START TEST make
00:01:41.076 ************************************
00:01:41.076 11:36:48 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:41.649 make[1]: Nothing to be done for 'all'.
00:01:43.035 The Meson build system
00:01:43.035 Version: 1.5.0
00:01:43.035 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:43.035 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:43.035 Build type: native build
00:01:43.035 Project name: libvfio-user
00:01:43.035 Project version: 0.0.1
00:01:43.035 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:43.035 C linker for the host machine: cc ld.bfd 2.40-14
00:01:43.035 Host machine cpu family: x86_64
00:01:43.035 Host machine cpu: x86_64
00:01:43.035 Run-time dependency threads found: YES
00:01:43.035 Library dl found: YES
00:01:43.035 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:43.035 Run-time dependency json-c found: YES 0.17
00:01:43.035 Run-time dependency cmocka found: YES 1.1.7
00:01:43.035 Program pytest-3 found: NO
00:01:43.035 Program flake8 found: NO
00:01:43.035 Program misspell-fixer found: NO
00:01:43.035 Program restructuredtext-lint found: NO
00:01:43.035 Program valgrind found: YES (/usr/bin/valgrind)
00:01:43.035 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:43.035 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:43.035 Compiler for C supports arguments -Wwrite-strings: YES
00:01:43.035 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:43.035 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:43.035 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:43.035 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:43.035 Build targets in project: 8
00:01:43.035 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:43.035 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:43.035
00:01:43.035 libvfio-user 0.0.1
00:01:43.035
00:01:43.035 User defined options
00:01:43.035 buildtype : debug
00:01:43.035 default_library: shared
00:01:43.035 libdir : /usr/local/lib
00:01:43.035
00:01:43.035 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:43.296 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:43.296 [1/37] Compiling C object samples/null.p/null.c.o
00:01:43.296 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:43.296 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:43.296 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:43.296 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:43.296 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:43.296 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:43.296 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:43.296 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:43.296 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:43.296 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:43.296 [12/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:43.296 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:43.296 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:43.296 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:43.296 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:43.296 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:43.296 [18/37] Compiling C object samples/server.p/server.c.o
00:01:43.296 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:43.296 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:43.296 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:43.296 [22/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:43.296 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:43.296 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:43.296 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:43.296 [26/37] Compiling C object samples/client.p/client.c.o
00:01:43.296 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:43.296 [28/37] Linking target samples/client
00:01:43.296 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:43.557 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:43.557 [31/37] Linking target test/unit_tests
00:01:43.557 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:43.557 [33/37] Linking target samples/null
00:01:43.557 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:43.557 [35/37] Linking target samples/server
00:01:43.557 [36/37] Linking target samples/gpio-pci-idio-16
00:01:43.557 [37/37] Linking target samples/lspci
00:01:43.557 INFO: autodetecting backend as ninja
00:01:43.557 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
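The libvfio-user stage above is a standard Meson/Ninja flow: configure a debug build tree, compile it with ninja, then stage the artifacts with meson install under a DESTDIR (that install command is the next entry in the log). A minimal equivalent sketch, assuming meson and ninja are on PATH; paths are shortened here, and the real invocation is driven by SPDK's build scripts:

SRC=spdk/libvfio-user                      # source tree (path shortened)
BUILD=spdk/build/libvfio-user/build-debug  # build directory used in the log
meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$BUILD"                          # builds the 37 targets listed above
# DESTDIR is made absolute so the staged install lands where intended
DESTDIR="$PWD/spdk/build/libvfio-user" meson install --quiet -C "$BUILD"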
00:01:43.817 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:44.078 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:44.078 ninja: no work to do.
00:01:50.674 The Meson build system
00:01:50.674 Version: 1.5.0
00:01:50.674 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:50.674 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:50.674 Build type: native build
00:01:50.674 Program cat found: YES (/usr/bin/cat)
00:01:50.674 Project name: DPDK
00:01:50.674 Project version: 24.03.0
00:01:50.674 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:50.674 C linker for the host machine: cc ld.bfd 2.40-14
00:01:50.674 Host machine cpu family: x86_64
00:01:50.674 Host machine cpu: x86_64
00:01:50.674 Message: ## Building in Developer Mode ##
00:01:50.674 Program pkg-config found: YES (/usr/bin/pkg-config) 1.9.5
00:01:50.674 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:50.674 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:50.674 Program python3 found: YES (/usr/bin/python3)
00:01:50.674 Program cat found: YES (/usr/bin/cat)
00:01:50.674 Compiler for C supports arguments -march=native: YES
00:01:50.674 Checking for size of "void *" : 8
00:01:50.674 Checking for size of "void *" : 8 (cached)
00:01:50.674 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:50.674 Library m found: YES
00:01:50.674 Library numa found: YES
00:01:50.674 Has header "numaif.h" : YES
00:01:50.674 Library fdt found: NO
00:01:50.674 Library execinfo found: NO
00:01:50.674 Has header "execinfo.h" : YES
00:01:50.674 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:50.674 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:50.674 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:50.674 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:50.674 Run-time dependency openssl found: YES 3.1.1
00:01:50.674 Run-time dependency libpcap found: YES 1.10.4
00:01:50.674 Has header "pcap.h" with dependency libpcap: YES
00:01:50.675 Compiler for C supports arguments -Wcast-qual: YES
00:01:50.675 Compiler for C supports arguments -Wdeprecated: YES
00:01:50.675 Compiler for C supports arguments -Wformat: YES
00:01:50.675 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:50.675 Compiler for C supports arguments -Wformat-security: NO
00:01:50.675 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:50.675 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:50.675 Compiler for C supports arguments -Wnested-externs: YES
00:01:50.675 Compiler for C supports arguments -Wold-style-definition: YES
00:01:50.675 Compiler for C supports arguments -Wpointer-arith: YES
00:01:50.675 Compiler for C supports arguments -Wsign-compare: YES
00:01:50.675 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:50.675 Compiler for C supports arguments -Wundef: YES
00:01:50.675 Compiler for C supports arguments -Wwrite-strings: YES
00:01:50.675 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:50.675 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:50.675 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:50.675 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:50.675 Program objdump found: YES (/usr/bin/objdump)
00:01:50.675 Compiler for C supports arguments -mavx512f: YES
00:01:50.675 Checking if "AVX512 checking" compiles: YES
00:01:50.675 Fetching value of define "__SSE4_2__" : 1
00:01:50.675 Fetching value of define "__AES__" : 1
00:01:50.675 Fetching value of define "__AVX__" : 1
00:01:50.675 Fetching value of define "__AVX2__" : 1
00:01:50.675 Fetching value of define "__AVX512BW__" : 1
00:01:50.675 Fetching value of define "__AVX512CD__" : 1
00:01:50.675 Fetching value of define "__AVX512DQ__" : 1
00:01:50.675 Fetching value of define "__AVX512F__" : 1
00:01:50.675 Fetching value of define "__AVX512VL__" : 1
00:01:50.675 Fetching value of define "__PCLMUL__" : 1
00:01:50.675 Fetching value of define "__RDRND__" : 1
00:01:50.675 Fetching value of define "__RDSEED__" : 1
00:01:50.675 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:50.675 Fetching value of define "__znver1__" : (undefined)
00:01:50.675 Fetching value of define "__znver2__" : (undefined)
00:01:50.675 Fetching value of define "__znver3__" : (undefined)
00:01:50.675 Fetching value of define "__znver4__" : (undefined)
00:01:50.675 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:50.675 Message: lib/log: Defining dependency "log"
00:01:50.675 Message: lib/kvargs: Defining dependency "kvargs"
00:01:50.675 Message: lib/telemetry: Defining dependency "telemetry"
00:01:50.675 Checking for function "getentropy" : NO
00:01:50.675 Message: lib/eal: Defining dependency "eal"
00:01:50.675 Message: lib/ring: Defining dependency "ring"
00:01:50.675 Message: lib/rcu: Defining dependency "rcu"
00:01:50.675 Message: lib/mempool: Defining dependency "mempool"
00:01:50.675 Message: lib/mbuf: Defining dependency "mbuf"
00:01:50.675 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:50.675 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:50.675 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:50.675 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:50.675 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:50.675 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:50.675 Compiler for C supports arguments -mpclmul: YES
00:01:50.675 Compiler for C supports arguments -maes: YES
00:01:50.675 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:50.675 Compiler for C supports arguments -mavx512bw: YES
00:01:50.675 Compiler for C supports arguments -mavx512dq: YES
00:01:50.675 Compiler for C supports arguments -mavx512vl: YES
00:01:50.675 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:50.675 Compiler for C supports arguments -mavx2: YES
00:01:50.675 Compiler for C supports arguments -mavx: YES
00:01:50.675 Message: lib/net: Defining dependency "net"
00:01:50.675 Message: lib/meter: Defining dependency "meter"
00:01:50.675 Message: lib/ethdev: Defining dependency "ethdev"
00:01:50.675 Message: lib/pci: Defining dependency "pci"
00:01:50.675 Message: lib/cmdline: Defining dependency "cmdline"
00:01:50.675 Message: lib/hash: Defining dependency "hash"
00:01:50.675 Message: lib/timer: Defining dependency "timer"
00:01:50.675 Message: lib/compressdev: Defining dependency "compressdev"
00:01:50.675 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:50.675 Message: lib/dmadev: Defining dependency "dmadev"
00:01:50.675 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:50.675 Message: lib/power: Defining dependency "power"
00:01:50.675 Message: lib/reorder: Defining dependency "reorder"
00:01:50.675 Message: lib/security: Defining dependency "security"
00:01:50.675 Has header "linux/userfaultfd.h" : YES
00:01:50.675 Has header "linux/vduse.h" : YES
00:01:50.675 Message: lib/vhost: Defining dependency "vhost"
00:01:50.675 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:50.675 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:50.675 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:50.675 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:50.675 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:50.675 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:50.675 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:50.675 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:50.675 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:50.675 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:50.675 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:50.675 Configuring doxy-api-html.conf using configuration
00:01:50.675 Configuring doxy-api-man.conf using configuration
00:01:50.675 Program mandb found: YES (/usr/bin/mandb)
00:01:50.675 Program sphinx-build found: NO
00:01:50.675 Configuring rte_build_config.h using configuration
00:01:50.675 Message:
00:01:50.675 =================
00:01:50.675 Applications Enabled
00:01:50.675 =================
00:01:50.675
00:01:50.675 apps:
00:01:50.675
00:01:50.675
00:01:50.675 Message:
00:01:50.675 =================
00:01:50.675 Libraries Enabled
00:01:50.675 =================
00:01:50.675
00:01:50.675 libs:
00:01:50.675 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:50.675 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:50.675 cryptodev, dmadev, power, reorder, security, vhost,
00:01:50.675
00:01:50.675 Message:
00:01:50.675 ===============
00:01:50.675 Drivers Enabled
00:01:50.675 ===============
00:01:50.675
00:01:50.675 common:
00:01:50.675
00:01:50.675 bus:
00:01:50.675 pci, vdev,
00:01:50.675 mempool:
00:01:50.675 ring,
00:01:50.675 dma:
00:01:50.675
00:01:50.675 net:
00:01:50.675
00:01:50.675 crypto:
00:01:50.675
00:01:50.675 compress:
00:01:50.675
00:01:50.675 vdpa:
00:01:50.675
00:01:50.675
00:01:50.675 Message:
00:01:50.675 =================
00:01:50.675 Content Skipped
00:01:50.675 =================
00:01:50.675
00:01:50.675 apps:
00:01:50.675 dumpcap: explicitly disabled via build config
00:01:50.675 graph: explicitly disabled via build config
00:01:50.675 pdump: explicitly disabled via build config
00:01:50.675 proc-info: explicitly disabled via build config
00:01:50.675 test-acl: explicitly disabled via build config
00:01:50.675 test-bbdev: explicitly disabled via build config
00:01:50.675 test-cmdline: explicitly disabled via build config
00:01:50.675 test-compress-perf: explicitly disabled via build config
00:01:50.675 test-crypto-perf: explicitly disabled via build config
00:01:50.675 test-dma-perf: explicitly disabled via build config
00:01:50.675 test-eventdev: explicitly disabled via build config
00:01:50.675 test-fib: explicitly disabled via build config
00:01:50.675 test-flow-perf: explicitly disabled via build config
00:01:50.675 test-gpudev: explicitly disabled via build config
00:01:50.675 test-mldev: explicitly disabled via build config
00:01:50.675 test-pipeline: explicitly disabled via build config
00:01:50.675 test-pmd: explicitly disabled via build config
00:01:50.675 test-regex: explicitly disabled via build config
00:01:50.675 test-sad: explicitly disabled via build config
00:01:50.675 test-security-perf: explicitly disabled via build config
00:01:50.675
00:01:50.675 libs:
00:01:50.675 argparse: explicitly disabled via build config
00:01:50.675 metrics: explicitly disabled via build config
00:01:50.675 acl: explicitly disabled via build config
00:01:50.675 bbdev: explicitly disabled via build config
00:01:50.675 bitratestats: explicitly disabled via build config
00:01:50.675 bpf: explicitly disabled via build config
00:01:50.675 cfgfile: explicitly disabled via build config
00:01:50.675 distributor: explicitly disabled via build config
00:01:50.675 efd: explicitly disabled via build config
00:01:50.675 eventdev: explicitly disabled via build config
00:01:50.675 dispatcher: explicitly disabled via build config
00:01:50.675 gpudev: explicitly disabled via build config
00:01:50.675 gro: explicitly disabled via build config
00:01:50.675 gso: explicitly disabled via build config
00:01:50.675 ip_frag: explicitly disabled via build config
00:01:50.675 jobstats: explicitly disabled via build config
00:01:50.675 latencystats: explicitly disabled via build config
00:01:50.675 lpm: explicitly disabled via build config
00:01:50.675 member: explicitly disabled via build config
00:01:50.675 pcapng: explicitly disabled via build config
00:01:50.675 rawdev: explicitly disabled via build config
00:01:50.675 regexdev: explicitly disabled via build config
00:01:50.675 mldev: explicitly disabled via build config
00:01:50.675 rib: explicitly disabled via build config
00:01:50.675 sched: explicitly disabled via build config
00:01:50.675 stack: explicitly disabled via build config
00:01:50.675 ipsec: explicitly disabled via build config
00:01:50.675 pdcp: explicitly disabled via build config
00:01:50.675 fib: explicitly disabled via build config
00:01:50.675 port: explicitly disabled via build config
00:01:50.675 pdump: explicitly disabled via build config
00:01:50.675 table: explicitly disabled via build config
00:01:50.675 pipeline: explicitly disabled via build config
00:01:50.675 graph: explicitly disabled via build config
00:01:50.675 node: explicitly disabled via build config
00:01:50.675
00:01:50.675 drivers:
00:01:50.675 common/cpt: not in enabled drivers build config
00:01:50.675 common/dpaax: not in enabled drivers build config
00:01:50.675 common/iavf: not in enabled drivers build config
00:01:50.675 common/idpf: not in enabled drivers build config
00:01:50.675 common/ionic: not in enabled drivers build config
00:01:50.675 common/mvep: not in enabled drivers build config
00:01:50.675 common/octeontx: not in enabled drivers build config
00:01:50.675 bus/auxiliary: not in enabled drivers build config
00:01:50.675 bus/cdx: not in enabled drivers build config
00:01:50.675 bus/dpaa: not in enabled drivers build config
00:01:50.675 bus/fslmc: not in enabled drivers build config
00:01:50.675 bus/ifpga: not in enabled drivers build config
00:01:50.675 bus/platform: not in enabled drivers build config
00:01:50.675 bus/uacce: not in enabled drivers build config
00:01:50.675 bus/vmbus: not in enabled drivers build config
00:01:50.675 common/cnxk: not in enabled drivers build config
00:01:50.675 common/mlx5: not in enabled drivers build config
00:01:50.676 common/nfp: not in enabled drivers build config
00:01:50.676 common/nitrox: not in enabled drivers build config
00:01:50.676 common/qat: not in enabled drivers build config
00:01:50.676 common/sfc_efx: not in enabled drivers build config
00:01:50.676 mempool/bucket: not in enabled drivers build config
00:01:50.676 mempool/cnxk: not in enabled drivers build config
00:01:50.676 mempool/dpaa: not in enabled drivers build config
00:01:50.676 mempool/dpaa2: not in enabled drivers build config
00:01:50.676 mempool/octeontx: not in enabled drivers build config
00:01:50.676 mempool/stack: not in enabled drivers build config
00:01:50.676 dma/cnxk: not in enabled drivers build config
00:01:50.676 dma/dpaa: not in enabled drivers build config
00:01:50.676 dma/dpaa2: not in enabled drivers build config
00:01:50.676 dma/hisilicon: not in enabled drivers build config
00:01:50.676 dma/idxd: not in enabled drivers build config
00:01:50.676 dma/ioat: not in enabled drivers build config
00:01:50.676 dma/skeleton: not in enabled drivers build config
00:01:50.676 net/af_packet: not in enabled drivers build config
00:01:50.676 net/af_xdp: not in enabled drivers build config
00:01:50.676 net/ark: not in enabled drivers build config
00:01:50.676 net/atlantic: not in enabled drivers build config
00:01:50.676 net/avp: not in enabled drivers build config
00:01:50.676 net/axgbe: not in enabled drivers build config
00:01:50.676 net/bnx2x: not in enabled drivers build config
00:01:50.676 net/bnxt: not in enabled drivers build config
00:01:50.676 net/bonding: not in enabled drivers build config
00:01:50.676 net/cnxk: not in enabled drivers build config
00:01:50.676 net/cpfl: not in enabled drivers build config
00:01:50.676 net/cxgbe: not in enabled drivers build config
00:01:50.676 net/dpaa: not in enabled drivers build config
00:01:50.676 net/dpaa2: not in enabled drivers build config
00:01:50.676 net/e1000: not in enabled drivers build config
00:01:50.676 net/ena: not in enabled drivers build config
00:01:50.676 net/enetc: not in enabled drivers build config
00:01:50.676 net/enetfec: not in enabled drivers build config
00:01:50.676 net/enic: not in enabled drivers build config
00:01:50.676 net/failsafe: not in enabled drivers build config
00:01:50.676 net/fm10k: not in enabled drivers build config
00:01:50.676 net/gve: not in enabled drivers build config
00:01:50.676 net/hinic: not in enabled drivers build config
00:01:50.676 net/hns3: not in enabled drivers build config
00:01:50.676 net/i40e: not in enabled drivers build config
00:01:50.676 net/iavf: not in enabled drivers build config
00:01:50.676 net/ice: not in enabled drivers build config
00:01:50.676 net/idpf: not in enabled drivers build config
00:01:50.676 net/igc: not in enabled drivers build config
00:01:50.676 net/ionic: not in enabled drivers build config
00:01:50.676 net/ipn3ke: not in enabled drivers build config
00:01:50.676 net/ixgbe: not in enabled drivers build config
00:01:50.676 net/mana: not in enabled drivers build config
00:01:50.676 net/memif: not in enabled drivers build config
00:01:50.676 net/mlx4: not in enabled drivers build config
00:01:50.676 net/mlx5: not in enabled drivers build config
00:01:50.676 net/mvneta: not in enabled drivers build config
00:01:50.676 net/mvpp2: not in enabled drivers build config
00:01:50.676 net/netvsc: not in enabled drivers build config
00:01:50.676 net/nfb: not in enabled drivers build config
00:01:50.676 net/nfp: not in enabled drivers build config
00:01:50.676 net/ngbe: not in enabled drivers build config
00:01:50.676 net/null: not in enabled drivers build config
00:01:50.676 net/octeontx: not in enabled drivers build config
00:01:50.676 net/octeon_ep: not in enabled drivers build config
00:01:50.676 net/pcap: not in enabled drivers build config
00:01:50.676 net/pfe: not in enabled drivers build config
00:01:50.676 net/qede: not in enabled drivers build config
00:01:50.676 net/ring: not in enabled drivers build config
00:01:50.676 net/sfc: not in enabled drivers build config
00:01:50.676 net/softnic: not in enabled drivers build config
00:01:50.676 net/tap: not in enabled drivers build config
00:01:50.676 net/thunderx: not in enabled drivers build config
00:01:50.676 net/txgbe: not in enabled drivers build config
00:01:50.676 net/vdev_netvsc: not in enabled drivers build config
00:01:50.676 net/vhost: not in enabled drivers build config
00:01:50.676 net/virtio: not in enabled drivers build config
00:01:50.676 net/vmxnet3: not in enabled drivers build config
00:01:50.676 raw/*: missing internal dependency, "rawdev"
00:01:50.676 crypto/armv8: not in enabled drivers build config
00:01:50.676 crypto/bcmfs: not in enabled drivers build config
00:01:50.676 crypto/caam_jr: not in enabled drivers build config
00:01:50.676 crypto/ccp: not in enabled drivers build config
00:01:50.676 crypto/cnxk: not in enabled drivers build config
00:01:50.676 crypto/dpaa_sec: not in enabled drivers build config
00:01:50.676 crypto/dpaa2_sec: not in enabled drivers build config
00:01:50.676 crypto/ipsec_mb: not in enabled drivers build config
00:01:50.676 crypto/mlx5: not in enabled drivers build config
00:01:50.676 crypto/mvsam: not in enabled drivers build config
00:01:50.676 crypto/nitrox: not in enabled drivers build config
00:01:50.676 crypto/null: not in enabled drivers build config
00:01:50.676 crypto/octeontx: not in enabled drivers build config
00:01:50.676 crypto/openssl: not in enabled drivers build config
00:01:50.676 crypto/scheduler: not in enabled drivers build config
00:01:50.676 crypto/uadk: not in enabled drivers build config
00:01:50.676 crypto/virtio: not in enabled drivers build config
00:01:50.676 compress/isal: not in enabled drivers build config
00:01:50.676 compress/mlx5: not in enabled drivers build config
00:01:50.676 compress/nitrox: not in enabled drivers build config
00:01:50.676 compress/octeontx: not in enabled drivers build config
00:01:50.676 compress/zlib: not in enabled drivers build config
00:01:50.676 regex/*: missing internal dependency, "regexdev"
00:01:50.676 ml/*: missing internal dependency, "mldev"
00:01:50.676 vdpa/ifc: not in enabled drivers build config
00:01:50.676 vdpa/mlx5: not in enabled drivers build config
00:01:50.676 vdpa/nfp: not in enabled drivers build config
00:01:50.676 vdpa/sfc: not in enabled drivers build config
00:01:50.676 event/*: missing internal dependency, "eventdev"
00:01:50.676 baseband/*: missing internal dependency, "bbdev"
00:01:50.676 gpu/*: missing internal dependency, "gpudev"
00:01:50.676
00:01:50.676
00:01:50.676 Build targets in project: 84
00:01:50.676
00:01:50.676 DPDK 24.03.0
00:01:50.676
00:01:50.676 User defined options
00:01:50.676 buildtype : debug
00:01:50.676 default_library : shared
00:01:50.676 libdir : lib
00:01:50.676 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:50.676 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:50.676 c_link_args :
00:01:50.676 cpu_instruction_set: native
00:01:50.676 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:50.676 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:50.676 enable_docs : false
00:01:50.676 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:50.676 enable_kmods : false
00:01:50.676 max_lcores : 128
00:01:50.676 tests : false
00:01:50.676
00:01:50.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:50.676 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:50.676 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:50.676 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:50.676 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:50.676 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:50.676 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:50.676 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:50.676 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:50.676 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:50.676 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:50.676 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:50.676 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:50.676 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:50.676 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:50.676 [14/267] Linking static target lib/librte_kvargs.a
00:01:50.676 [15/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:50.676 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:50.676 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:50.676 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:50.676 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:50.936 [20/267] Linking static target lib/librte_log.a
00:01:50.936 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:50.936 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:50.936 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:50.936 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:50.936 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:50.936 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:50.936 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:50.936 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:50.936 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:50.936 [30/267] Linking static target lib/librte_pci.a
00:01:50.936 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:50.936 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:50.936 [33/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:50.936 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:50.936 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:50.936 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:50.936 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:51.196 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:51.196 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:51.196 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:51.196 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:51.196 [42/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:51.196 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:51.196 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:51.196 [45/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:51.196 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:51.196 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:51.196 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.196 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:51.196 [50/267] Linking static target lib/librte_timer.a
00:01:51.196 [51/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.196 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:51.196 [53/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:51.196 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:51.196 [55/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.196 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:51.196 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:51.196 [58/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:51.196 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:51.196 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:51.196 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:51.196 [62/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:51.196 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:51.196 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:51.196 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:51.196 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:51.196 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:51.196 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:51.196 [69/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:51.196 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:51.196 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:51.196 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:51.196 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:51.196 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:51.196 [75/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:51.196 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:51.196 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:51.196 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:51.196 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:51.196 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:51.196 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:51.196 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:51.196 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:51.196 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:51.196 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:51.196 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:51.196 [87/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:51.196 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:51.196 [89/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:51.196 [90/267] Linking static target lib/librte_meter.a
00:01:51.196 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:51.196 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:51.196 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:51.196 [94/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:51.196 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:51.196 [96/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:51.458 [97/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:51.458 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:51.458 [99/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:51.458 [100/267] Linking static target lib/librte_telemetry.a
00:01:51.458 [101/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:51.458 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:51.458 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:51.458 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:51.458 [105/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:51.458 [106/267] Linking static target lib/librte_ring.a
00:01:51.458 [107/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:51.458 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:51.458 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:51.458 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:51.458 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:51.458 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:51.458 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:51.458 [114/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:51.458 [115/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:51.458 [116/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:51.458 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:51.458 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:51.458 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:51.458 [120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:51.458 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:51.458 [122/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:51.458 [123/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:51.458 [124/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:51.458 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:51.458 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:51.458 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:51.458 [128/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:51.458 [129/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:51.458 [130/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:51.458 [131/267] Linking static target lib/librte_cmdline.a
00:01:51.458 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:51.458 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:51.458 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:51.458 [135/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:51.458 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:51.458 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:51.458 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:51.458 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:51.458 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:51.458 [141/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:51.458 [142/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:51.458 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:51.458 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:51.458 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:51.458 [146/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:51.458 [147/267] Linking static target lib/librte_dmadev.a
00:01:51.458 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:51.458 [149/267] Linking static target lib/librte_compressdev.a
00:01:51.458 [150/267] Generating
lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.458 [151/267] Linking static target lib/librte_rcu.a 00:01:51.458 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:51.458 [153/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:51.458 [154/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.458 [155/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.458 [156/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.458 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.458 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.458 [159/267] Linking static target lib/librte_mempool.a 00:01:51.458 [160/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.458 [161/267] Linking static target lib/librte_net.a 00:01:51.458 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:51.458 [163/267] Linking target lib/librte_log.so.24.1 00:01:51.458 [164/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:51.458 [165/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:51.458 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:51.458 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.458 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:51.458 [169/267] Linking static target lib/librte_eal.a 00:01:51.458 [170/267] Linking static target lib/librte_reorder.a 00:01:51.458 [171/267] Linking static target lib/librte_power.a 00:01:51.459 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:51.459 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.459 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.459 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.459 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.720 [177/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.720 [178/267] Linking static target lib/librte_mbuf.a 00:01:51.720 [179/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:51.720 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.720 [181/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:51.720 [182/267] Linking static target lib/librte_security.a 00:01:51.720 [183/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.720 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.720 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.720 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.720 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:51.720 [188/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.720 [189/267] Linking static target drivers/librte_bus_vdev.a 00:01:51.720 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:51.720 [191/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.720 [192/267] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:51.720 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.720 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.720 [195/267] Linking static target drivers/librte_bus_pci.a 00:01:51.720 [196/267] Linking target lib/librte_kvargs.so.24.1 00:01:51.720 [197/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.720 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:51.720 [199/267] Linking static target lib/librte_hash.a 00:01:51.720 [200/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:51.720 [201/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:51.720 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:51.720 [203/267] Linking static target lib/librte_cryptodev.a 00:01:51.981 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.981 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.981 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.981 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.981 [208/267] Linking static target drivers/librte_mempool_ring.a 00:01:51.981 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.981 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.981 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:51.981 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.981 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.242 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.242 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.242 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.242 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.242 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.242 [219/267] Linking static target lib/librte_ethdev.a 00:01:52.504 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.504 [221/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.504 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.504 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.504 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.764 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.764 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.341 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.341 [228/267] Linking static target lib/librte_vhost.a 
00:01:53.916 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.303 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.889 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.833 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.094 [233/267] Linking target lib/librte_eal.so.24.1 00:02:03.094 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:03.094 [235/267] Linking target lib/librte_meter.so.24.1 00:02:03.095 [236/267] Linking target lib/librte_pci.so.24.1 00:02:03.095 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:03.095 [238/267] Linking target lib/librte_ring.so.24.1 00:02:03.095 [239/267] Linking target lib/librte_timer.so.24.1 00:02:03.095 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:03.356 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:03.356 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:03.356 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:03.356 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:03.356 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:03.356 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:03.356 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:03.356 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:03.617 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:03.617 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:03.617 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:03.617 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:03.617 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.878 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:03.878 [255/267] Linking target lib/librte_net.so.24.1 00:02:03.878 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:03.878 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:03.878 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.878 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.878 [260/267] Linking target lib/librte_hash.so.24.1 00:02:03.878 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:03.878 [262/267] Linking target lib/librte_security.so.24.1 00:02:03.878 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:04.141 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:04.141 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:04.141 [266/267] Linking target lib/librte_power.so.24.1 00:02:04.141 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:04.141 INFO: autodetecting backend as ninja 00:02:04.141 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:08.346 CC lib/log/log.o 00:02:08.346 CC lib/log/log_flags.o 00:02:08.346 CC lib/ut_mock/mock.o 00:02:08.346 CC lib/log/log_deprecated.o 00:02:08.346 CC 
lib/ut/ut.o 00:02:08.346 LIB libspdk_ut_mock.a 00:02:08.346 LIB libspdk_log.a 00:02:08.346 LIB libspdk_ut.a 00:02:08.346 SO libspdk_ut_mock.so.6.0 00:02:08.346 SO libspdk_log.so.7.1 00:02:08.346 SO libspdk_ut.so.2.0 00:02:08.346 SYMLINK libspdk_ut_mock.so 00:02:08.346 SYMLINK libspdk_log.so 00:02:08.346 SYMLINK libspdk_ut.so 00:02:08.607 CC lib/dma/dma.o 00:02:08.607 CXX lib/trace_parser/trace.o 00:02:08.607 CC lib/util/base64.o 00:02:08.607 CC lib/util/bit_array.o 00:02:08.607 CC lib/util/cpuset.o 00:02:08.607 CC lib/ioat/ioat.o 00:02:08.607 CC lib/util/crc16.o 00:02:08.607 CC lib/util/crc32.o 00:02:08.607 CC lib/util/crc32c.o 00:02:08.607 CC lib/util/dif.o 00:02:08.607 CC lib/util/crc32_ieee.o 00:02:08.607 CC lib/util/crc64.o 00:02:08.607 CC lib/util/fd.o 00:02:08.607 CC lib/util/fd_group.o 00:02:08.607 CC lib/util/file.o 00:02:08.607 CC lib/util/hexlify.o 00:02:08.607 CC lib/util/iov.o 00:02:08.607 CC lib/util/math.o 00:02:08.607 CC lib/util/net.o 00:02:08.607 CC lib/util/pipe.o 00:02:08.607 CC lib/util/strerror_tls.o 00:02:08.607 CC lib/util/string.o 00:02:08.607 CC lib/util/uuid.o 00:02:08.607 CC lib/util/xor.o 00:02:08.607 CC lib/util/zipf.o 00:02:08.607 CC lib/util/md5.o 00:02:08.865 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.865 CC lib/vfio_user/host/vfio_user.o 00:02:08.865 LIB libspdk_dma.a 00:02:08.865 SO libspdk_dma.so.5.0 00:02:08.865 LIB libspdk_ioat.a 00:02:08.865 SYMLINK libspdk_dma.so 00:02:08.865 SO libspdk_ioat.so.7.0 00:02:09.126 SYMLINK libspdk_ioat.so 00:02:09.126 LIB libspdk_vfio_user.a 00:02:09.126 SO libspdk_vfio_user.so.5.0 00:02:09.126 SYMLINK libspdk_vfio_user.so 00:02:09.126 LIB libspdk_util.a 00:02:09.126 SO libspdk_util.so.10.1 00:02:09.387 SYMLINK libspdk_util.so 00:02:09.387 LIB libspdk_trace_parser.a 00:02:09.387 SO libspdk_trace_parser.so.6.0 00:02:09.649 SYMLINK libspdk_trace_parser.so 00:02:09.649 CC lib/env_dpdk/env.o 00:02:09.649 CC lib/json/json_parse.o 00:02:09.649 CC lib/env_dpdk/memory.o 00:02:09.649 CC lib/json/json_util.o 00:02:09.649 CC lib/idxd/idxd.o 00:02:09.649 CC lib/env_dpdk/pci.o 00:02:09.649 CC lib/json/json_write.o 00:02:09.649 CC lib/idxd/idxd_user.o 00:02:09.649 CC lib/env_dpdk/init.o 00:02:09.649 CC lib/idxd/idxd_kernel.o 00:02:09.649 CC lib/env_dpdk/threads.o 00:02:09.649 CC lib/env_dpdk/pci_ioat.o 00:02:09.649 CC lib/env_dpdk/pci_virtio.o 00:02:09.649 CC lib/env_dpdk/pci_vmd.o 00:02:09.649 CC lib/env_dpdk/pci_idxd.o 00:02:09.649 CC lib/rdma_utils/rdma_utils.o 00:02:09.649 CC lib/vmd/vmd.o 00:02:09.649 CC lib/env_dpdk/pci_event.o 00:02:09.649 CC lib/vmd/led.o 00:02:09.649 CC lib/env_dpdk/sigbus_handler.o 00:02:09.649 CC lib/env_dpdk/pci_dpdk.o 00:02:09.649 CC lib/conf/conf.o 00:02:09.649 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.649 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.911 LIB libspdk_conf.a 00:02:09.911 SO libspdk_conf.so.6.0 00:02:09.911 LIB libspdk_json.a 00:02:10.173 LIB libspdk_rdma_utils.a 00:02:10.173 SO libspdk_json.so.6.0 00:02:10.173 SO libspdk_rdma_utils.so.1.0 00:02:10.173 SYMLINK libspdk_conf.so 00:02:10.173 SYMLINK libspdk_json.so 00:02:10.173 SYMLINK libspdk_rdma_utils.so 00:02:10.173 LIB libspdk_vmd.a 00:02:10.173 SO libspdk_vmd.so.6.0 00:02:10.173 LIB libspdk_idxd.a 00:02:10.434 SYMLINK libspdk_vmd.so 00:02:10.434 SO libspdk_idxd.so.12.1 00:02:10.434 SYMLINK libspdk_idxd.so 00:02:10.434 CC lib/jsonrpc/jsonrpc_server.o 00:02:10.434 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.434 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.434 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.434 CC lib/rdma_provider/common.o 
00:02:10.434 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:10.696 LIB libspdk_rdma_provider.a 00:02:10.696 LIB libspdk_jsonrpc.a 00:02:10.696 SO libspdk_rdma_provider.so.7.0 00:02:10.696 SO libspdk_jsonrpc.so.6.0 00:02:10.958 SYMLINK libspdk_rdma_provider.so 00:02:10.958 SYMLINK libspdk_jsonrpc.so 00:02:10.958 LIB libspdk_env_dpdk.a 00:02:10.958 SO libspdk_env_dpdk.so.15.1 00:02:11.220 SYMLINK libspdk_env_dpdk.so 00:02:11.220 CC lib/rpc/rpc.o 00:02:11.480 LIB libspdk_rpc.a 00:02:11.480 SO libspdk_rpc.so.6.0 00:02:11.481 SYMLINK libspdk_rpc.so 00:02:12.051 CC lib/trace/trace.o 00:02:12.051 CC lib/trace/trace_flags.o 00:02:12.051 CC lib/trace/trace_rpc.o 00:02:12.051 CC lib/notify/notify.o 00:02:12.051 CC lib/notify/notify_rpc.o 00:02:12.051 CC lib/keyring/keyring.o 00:02:12.051 CC lib/keyring/keyring_rpc.o 00:02:12.051 LIB libspdk_notify.a 00:02:12.051 SO libspdk_notify.so.6.0 00:02:12.051 LIB libspdk_trace.a 00:02:12.051 LIB libspdk_keyring.a 00:02:12.051 SO libspdk_trace.so.11.0 00:02:12.312 SO libspdk_keyring.so.2.0 00:02:12.312 SYMLINK libspdk_notify.so 00:02:12.312 SYMLINK libspdk_trace.so 00:02:12.312 SYMLINK libspdk_keyring.so 00:02:12.573 CC lib/sock/sock.o 00:02:12.573 CC lib/sock/sock_rpc.o 00:02:12.573 CC lib/thread/thread.o 00:02:12.573 CC lib/thread/iobuf.o 00:02:13.145 LIB libspdk_sock.a 00:02:13.145 SO libspdk_sock.so.10.0 00:02:13.145 SYMLINK libspdk_sock.so 00:02:13.406 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.406 CC lib/nvme/nvme_ctrlr.o 00:02:13.406 CC lib/nvme/nvme_fabric.o 00:02:13.406 CC lib/nvme/nvme_ns_cmd.o 00:02:13.406 CC lib/nvme/nvme_ns.o 00:02:13.406 CC lib/nvme/nvme_pcie_common.o 00:02:13.406 CC lib/nvme/nvme_pcie.o 00:02:13.406 CC lib/nvme/nvme_qpair.o 00:02:13.406 CC lib/nvme/nvme.o 00:02:13.406 CC lib/nvme/nvme_quirks.o 00:02:13.406 CC lib/nvme/nvme_transport.o 00:02:13.406 CC lib/nvme/nvme_discovery.o 00:02:13.406 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.406 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.406 CC lib/nvme/nvme_tcp.o 00:02:13.406 CC lib/nvme/nvme_opal.o 00:02:13.406 CC lib/nvme/nvme_io_msg.o 00:02:13.406 CC lib/nvme/nvme_poll_group.o 00:02:13.406 CC lib/nvme/nvme_zns.o 00:02:13.406 CC lib/nvme/nvme_stubs.o 00:02:13.406 CC lib/nvme/nvme_auth.o 00:02:13.406 CC lib/nvme/nvme_cuse.o 00:02:13.406 CC lib/nvme/nvme_vfio_user.o 00:02:13.406 CC lib/nvme/nvme_rdma.o 00:02:13.977 LIB libspdk_thread.a 00:02:13.977 SO libspdk_thread.so.11.0 00:02:13.977 SYMLINK libspdk_thread.so 00:02:14.238 CC lib/accel/accel.o 00:02:14.238 CC lib/accel/accel_rpc.o 00:02:14.238 CC lib/accel/accel_sw.o 00:02:14.238 CC lib/fsdev/fsdev_io.o 00:02:14.238 CC lib/fsdev/fsdev.o 00:02:14.238 CC lib/fsdev/fsdev_rpc.o 00:02:14.238 CC lib/virtio/virtio.o 00:02:14.238 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.238 CC lib/blob/blobstore.o 00:02:14.238 CC lib/virtio/virtio_vhost_user.o 00:02:14.238 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.238 CC lib/virtio/virtio_vfio_user.o 00:02:14.238 CC lib/blob/request.o 00:02:14.238 CC lib/virtio/virtio_pci.o 00:02:14.238 CC lib/blob/zeroes.o 00:02:14.238 CC lib/blob/blob_bs_dev.o 00:02:14.238 CC lib/init/json_config.o 00:02:14.238 CC lib/init/subsystem.o 00:02:14.238 CC lib/init/subsystem_rpc.o 00:02:14.238 CC lib/init/rpc.o 00:02:14.811 LIB libspdk_init.a 00:02:14.811 SO libspdk_init.so.6.0 00:02:14.811 LIB libspdk_virtio.a 00:02:14.811 LIB libspdk_vfu_tgt.a 00:02:14.812 SO libspdk_vfu_tgt.so.3.0 00:02:14.812 SO libspdk_virtio.so.7.0 00:02:14.812 SYMLINK libspdk_init.so 00:02:14.812 SYMLINK libspdk_vfu_tgt.so 00:02:14.812 SYMLINK libspdk_virtio.so 
00:02:14.812 LIB libspdk_fsdev.a 00:02:15.073 SO libspdk_fsdev.so.2.0 00:02:15.073 SYMLINK libspdk_fsdev.so 00:02:15.073 CC lib/event/app.o 00:02:15.073 CC lib/event/reactor.o 00:02:15.073 CC lib/event/log_rpc.o 00:02:15.073 CC lib/event/app_rpc.o 00:02:15.073 CC lib/event/scheduler_static.o 00:02:15.336 LIB libspdk_accel.a 00:02:15.336 SO libspdk_accel.so.16.0 00:02:15.336 LIB libspdk_nvme.a 00:02:15.336 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:15.336 SYMLINK libspdk_accel.so 00:02:15.597 SO libspdk_nvme.so.15.0 00:02:15.597 LIB libspdk_event.a 00:02:15.597 SO libspdk_event.so.14.0 00:02:15.597 SYMLINK libspdk_event.so 00:02:15.859 CC lib/bdev/bdev.o 00:02:15.859 CC lib/bdev/part.o 00:02:15.859 CC lib/bdev/bdev_rpc.o 00:02:15.859 CC lib/bdev/bdev_zone.o 00:02:15.859 CC lib/bdev/scsi_nvme.o 00:02:15.859 SYMLINK libspdk_nvme.so 00:02:16.121 LIB libspdk_fuse_dispatcher.a 00:02:16.121 SO libspdk_fuse_dispatcher.so.1.0 00:02:16.121 SYMLINK libspdk_fuse_dispatcher.so 00:02:17.066 LIB libspdk_blob.a 00:02:17.066 SO libspdk_blob.so.12.0 00:02:17.066 SYMLINK libspdk_blob.so 00:02:17.335 CC lib/blobfs/blobfs.o 00:02:17.335 CC lib/lvol/lvol.o 00:02:17.335 CC lib/blobfs/tree.o 00:02:18.280 LIB libspdk_blobfs.a 00:02:18.280 LIB libspdk_bdev.a 00:02:18.280 SO libspdk_blobfs.so.11.0 00:02:18.280 SO libspdk_bdev.so.17.0 00:02:18.280 LIB libspdk_lvol.a 00:02:18.280 SYMLINK libspdk_blobfs.so 00:02:18.280 SO libspdk_lvol.so.11.0 00:02:18.280 SYMLINK libspdk_bdev.so 00:02:18.280 SYMLINK libspdk_lvol.so 00:02:18.540 CC lib/nvmf/ctrlr.o 00:02:18.540 CC lib/nvmf/ctrlr_discovery.o 00:02:18.540 CC lib/nvmf/ctrlr_bdev.o 00:02:18.540 CC lib/nvmf/subsystem.o 00:02:18.540 CC lib/nvmf/nvmf.o 00:02:18.540 CC lib/nvmf/nvmf_rpc.o 00:02:18.540 CC lib/nvmf/tcp.o 00:02:18.540 CC lib/nvmf/transport.o 00:02:18.540 CC lib/nvmf/stubs.o 00:02:18.540 CC lib/nvmf/mdns_server.o 00:02:18.540 CC lib/nvmf/vfio_user.o 00:02:18.540 CC lib/nvmf/rdma.o 00:02:18.540 CC lib/nvmf/auth.o 00:02:18.540 CC lib/scsi/lun.o 00:02:18.540 CC lib/scsi/dev.o 00:02:18.540 CC lib/nbd/nbd.o 00:02:18.540 CC lib/ublk/ublk.o 00:02:18.540 CC lib/scsi/port.o 00:02:18.540 CC lib/nbd/nbd_rpc.o 00:02:18.540 CC lib/ublk/ublk_rpc.o 00:02:18.540 CC lib/scsi/scsi.o 00:02:18.540 CC lib/scsi/scsi_bdev.o 00:02:18.540 CC lib/scsi/scsi_pr.o 00:02:18.540 CC lib/scsi/scsi_rpc.o 00:02:18.540 CC lib/scsi/task.o 00:02:18.540 CC lib/ftl/ftl_core.o 00:02:18.540 CC lib/ftl/ftl_init.o 00:02:18.540 CC lib/ftl/ftl_layout.o 00:02:18.540 CC lib/ftl/ftl_debug.o 00:02:18.540 CC lib/ftl/ftl_io.o 00:02:18.540 CC lib/ftl/ftl_sb.o 00:02:18.540 CC lib/ftl/ftl_l2p.o 00:02:18.540 CC lib/ftl/ftl_l2p_flat.o 00:02:18.540 CC lib/ftl/ftl_nv_cache.o 00:02:18.540 CC lib/ftl/ftl_band.o 00:02:18.540 CC lib/ftl/ftl_band_ops.o 00:02:18.540 CC lib/ftl/ftl_writer.o 00:02:18.540 CC lib/ftl/ftl_rq.o 00:02:18.801 CC lib/ftl/ftl_reloc.o 00:02:18.801 CC lib/ftl/ftl_l2p_cache.o 00:02:18.801 CC lib/ftl/ftl_p2l.o 00:02:18.801 CC lib/ftl/ftl_p2l_log.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.801 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.801 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.801 CC lib/ftl/utils/ftl_md.o 00:02:18.801 CC lib/ftl/utils/ftl_conf.o 00:02:18.801 CC lib/ftl/utils/ftl_mempool.o 00:02:18.801 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.801 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.801 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.801 CC lib/ftl/utils/ftl_property.o 00:02:18.801 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.801 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.801 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.801 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:18.801 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.801 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.801 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.801 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.801 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:18.801 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.801 CC lib/ftl/base/ftl_base_dev.o 00:02:18.801 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:18.801 CC lib/ftl/ftl_trace.o 00:02:18.801 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.372 LIB libspdk_nbd.a 00:02:19.372 SO libspdk_nbd.so.7.0 00:02:19.372 LIB libspdk_scsi.a 00:02:19.372 SYMLINK libspdk_nbd.so 00:02:19.372 SO libspdk_scsi.so.9.0 00:02:19.372 LIB libspdk_ublk.a 00:02:19.372 SO libspdk_ublk.so.3.0 00:02:19.372 SYMLINK libspdk_scsi.so 00:02:19.372 SYMLINK libspdk_ublk.so 00:02:19.634 LIB libspdk_ftl.a 00:02:19.896 CC lib/iscsi/conn.o 00:02:19.896 CC lib/iscsi/init_grp.o 00:02:19.896 CC lib/iscsi/iscsi.o 00:02:19.896 CC lib/iscsi/param.o 00:02:19.896 CC lib/vhost/vhost.o 00:02:19.896 CC lib/iscsi/portal_grp.o 00:02:19.896 CC lib/vhost/vhost_rpc.o 00:02:19.896 CC lib/iscsi/tgt_node.o 00:02:19.896 CC lib/vhost/vhost_scsi.o 00:02:19.896 CC lib/iscsi/iscsi_subsystem.o 00:02:19.896 CC lib/vhost/vhost_blk.o 00:02:19.896 CC lib/iscsi/iscsi_rpc.o 00:02:19.896 CC lib/vhost/rte_vhost_user.o 00:02:19.896 CC lib/iscsi/task.o 00:02:19.896 SO libspdk_ftl.so.9.0 00:02:20.157 SYMLINK libspdk_ftl.so 00:02:20.418 LIB libspdk_nvmf.a 00:02:20.680 SO libspdk_nvmf.so.20.0 00:02:20.680 LIB libspdk_vhost.a 00:02:20.680 SO libspdk_vhost.so.8.0 00:02:20.680 SYMLINK libspdk_nvmf.so 00:02:20.942 SYMLINK libspdk_vhost.so 00:02:20.942 LIB libspdk_iscsi.a 00:02:20.942 SO libspdk_iscsi.so.8.0 00:02:21.204 SYMLINK libspdk_iscsi.so 00:02:21.778 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.778 CC module/vfu_device/vfu_virtio.o 00:02:21.778 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.778 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.778 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.779 CC module/vfu_device/vfu_virtio_fs.o 00:02:21.779 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.779 CC module/keyring/linux/keyring.o 00:02:21.779 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.779 CC module/keyring/linux/keyring_rpc.o 00:02:21.779 LIB libspdk_env_dpdk_rpc.a 00:02:21.779 CC module/accel/error/accel_error.o 00:02:21.779 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.779 CC module/accel/error/accel_error_rpc.o 00:02:21.779 CC module/blob/bdev/blob_bdev.o 00:02:21.779 CC module/accel/ioat/accel_ioat.o 00:02:21.779 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.779 CC module/sock/posix/posix.o 00:02:21.779 CC module/accel/dsa/accel_dsa.o 00:02:21.779 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.779 CC module/accel/iaa/accel_iaa.o 00:02:21.779 CC module/keyring/file/keyring.o 00:02:21.779 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.779 CC module/keyring/file/keyring_rpc.o 00:02:21.779 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:21.779 CC module/fsdev/aio/fsdev_aio.o 
00:02:22.041 CC module/fsdev/aio/linux_aio_mgr.o 00:02:22.041 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.041 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.041 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.041 LIB libspdk_keyring_linux.a 00:02:22.041 LIB libspdk_scheduler_gscheduler.a 00:02:22.041 LIB libspdk_keyring_file.a 00:02:22.041 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.041 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.041 SO libspdk_keyring_linux.so.1.0 00:02:22.041 SO libspdk_keyring_file.so.2.0 00:02:22.041 LIB libspdk_accel_ioat.a 00:02:22.041 LIB libspdk_scheduler_dynamic.a 00:02:22.041 LIB libspdk_accel_error.a 00:02:22.041 LIB libspdk_accel_iaa.a 00:02:22.041 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.303 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.303 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.303 SYMLINK libspdk_keyring_file.so 00:02:22.303 SO libspdk_accel_ioat.so.6.0 00:02:22.303 SYMLINK libspdk_keyring_linux.so 00:02:22.303 SO libspdk_accel_error.so.2.0 00:02:22.303 SO libspdk_accel_iaa.so.3.0 00:02:22.303 LIB libspdk_blob_bdev.a 00:02:22.303 LIB libspdk_accel_dsa.a 00:02:22.303 SO libspdk_blob_bdev.so.12.0 00:02:22.303 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.303 SO libspdk_accel_dsa.so.5.0 00:02:22.303 SYMLINK libspdk_accel_ioat.so 00:02:22.303 SYMLINK libspdk_accel_error.so 00:02:22.303 SYMLINK libspdk_accel_iaa.so 00:02:22.303 LIB libspdk_vfu_device.a 00:02:22.303 SYMLINK libspdk_blob_bdev.so 00:02:22.303 SYMLINK libspdk_accel_dsa.so 00:02:22.303 SO libspdk_vfu_device.so.3.0 00:02:22.565 SYMLINK libspdk_vfu_device.so 00:02:22.565 LIB libspdk_fsdev_aio.a 00:02:22.565 LIB libspdk_sock_posix.a 00:02:22.565 SO libspdk_fsdev_aio.so.1.0 00:02:22.565 SO libspdk_sock_posix.so.6.0 00:02:22.824 SYMLINK libspdk_fsdev_aio.so 00:02:22.824 SYMLINK libspdk_sock_posix.so 00:02:22.824 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.824 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.824 CC module/bdev/malloc/bdev_malloc.o 00:02:22.824 CC module/bdev/null/bdev_null.o 00:02:22.824 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.824 CC module/bdev/null/bdev_null_rpc.o 00:02:22.824 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.824 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.824 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.824 CC module/bdev/gpt/gpt.o 00:02:22.824 CC module/bdev/nvme/bdev_nvme.o 00:02:22.824 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.824 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.824 CC module/bdev/nvme/nvme_rpc.o 00:02:22.824 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.824 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.824 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.824 CC module/bdev/error/vbdev_error.o 00:02:22.824 CC module/bdev/nvme/vbdev_opal.o 00:02:22.824 CC module/bdev/split/vbdev_split.o 00:02:22.824 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.824 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.824 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.824 CC module/bdev/delay/vbdev_delay.o 00:02:22.824 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.824 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.824 CC module/bdev/raid/bdev_raid.o 00:02:22.824 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.824 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.824 CC module/bdev/aio/bdev_aio.o 00:02:22.824 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.824 CC module/bdev/raid/raid0.o 00:02:22.824 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.824 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.824 CC module/bdev/raid/raid1.o 
00:02:22.824 CC module/bdev/ftl/bdev_ftl.o 00:02:22.824 CC module/bdev/raid/concat.o 00:02:22.824 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.824 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.824 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.824 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.824 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.084 LIB libspdk_blobfs_bdev.a 00:02:23.084 SO libspdk_blobfs_bdev.so.6.0 00:02:23.084 LIB libspdk_bdev_split.a 00:02:23.084 LIB libspdk_bdev_null.a 00:02:23.084 SO libspdk_bdev_split.so.6.0 00:02:23.084 LIB libspdk_bdev_error.a 00:02:23.084 SO libspdk_bdev_null.so.6.0 00:02:23.356 SYMLINK libspdk_blobfs_bdev.so 00:02:23.356 LIB libspdk_bdev_gpt.a 00:02:23.356 SO libspdk_bdev_error.so.6.0 00:02:23.356 LIB libspdk_bdev_ftl.a 00:02:23.356 SYMLINK libspdk_bdev_null.so 00:02:23.356 SO libspdk_bdev_gpt.so.6.0 00:02:23.356 SYMLINK libspdk_bdev_split.so 00:02:23.356 LIB libspdk_bdev_passthru.a 00:02:23.356 LIB libspdk_bdev_aio.a 00:02:23.356 SO libspdk_bdev_ftl.so.6.0 00:02:23.356 LIB libspdk_bdev_zone_block.a 00:02:23.356 LIB libspdk_bdev_malloc.a 00:02:23.356 SO libspdk_bdev_passthru.so.6.0 00:02:23.356 SYMLINK libspdk_bdev_error.so 00:02:23.356 SO libspdk_bdev_aio.so.6.0 00:02:23.356 LIB libspdk_bdev_iscsi.a 00:02:23.356 SYMLINK libspdk_bdev_gpt.so 00:02:23.356 SO libspdk_bdev_zone_block.so.6.0 00:02:23.356 SO libspdk_bdev_malloc.so.6.0 00:02:23.356 LIB libspdk_bdev_delay.a 00:02:23.356 SYMLINK libspdk_bdev_ftl.so 00:02:23.356 SO libspdk_bdev_iscsi.so.6.0 00:02:23.356 SYMLINK libspdk_bdev_passthru.so 00:02:23.356 SO libspdk_bdev_delay.so.6.0 00:02:23.356 SYMLINK libspdk_bdev_aio.so 00:02:23.356 SYMLINK libspdk_bdev_zone_block.so 00:02:23.356 SYMLINK libspdk_bdev_malloc.so 00:02:23.356 LIB libspdk_bdev_lvol.a 00:02:23.356 SYMLINK libspdk_bdev_iscsi.so 00:02:23.356 LIB libspdk_bdev_virtio.a 00:02:23.356 SO libspdk_bdev_lvol.so.6.0 00:02:23.356 SYMLINK libspdk_bdev_delay.so 00:02:23.622 SO libspdk_bdev_virtio.so.6.0 00:02:23.622 SYMLINK libspdk_bdev_lvol.so 00:02:23.622 SYMLINK libspdk_bdev_virtio.so 00:02:23.883 LIB libspdk_bdev_raid.a 00:02:23.883 SO libspdk_bdev_raid.so.6.0 00:02:24.144 SYMLINK libspdk_bdev_raid.so 00:02:25.531 LIB libspdk_bdev_nvme.a 00:02:25.531 SO libspdk_bdev_nvme.so.7.1 00:02:25.531 SYMLINK libspdk_bdev_nvme.so 00:02:26.105 CC module/event/subsystems/iobuf/iobuf.o 00:02:26.105 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:26.105 CC module/event/subsystems/vmd/vmd.o 00:02:26.105 CC module/event/subsystems/keyring/keyring.o 00:02:26.105 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:26.105 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:26.105 CC module/event/subsystems/sock/sock.o 00:02:26.105 CC module/event/subsystems/fsdev/fsdev.o 00:02:26.105 CC module/event/subsystems/scheduler/scheduler.o 00:02:26.105 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:26.366 LIB libspdk_event_sock.a 00:02:26.366 LIB libspdk_event_scheduler.a 00:02:26.366 LIB libspdk_event_keyring.a 00:02:26.366 LIB libspdk_event_vhost_blk.a 00:02:26.366 LIB libspdk_event_iobuf.a 00:02:26.366 LIB libspdk_event_vmd.a 00:02:26.366 LIB libspdk_event_fsdev.a 00:02:26.366 LIB libspdk_event_vfu_tgt.a 00:02:26.366 SO libspdk_event_scheduler.so.4.0 00:02:26.366 SO libspdk_event_sock.so.5.0 00:02:26.366 SO libspdk_event_keyring.so.1.0 00:02:26.366 SO libspdk_event_iobuf.so.3.0 00:02:26.366 SO libspdk_event_vhost_blk.so.3.0 00:02:26.366 SO libspdk_event_fsdev.so.1.0 00:02:26.366 SO libspdk_event_vmd.so.6.0 00:02:26.366 SO libspdk_event_vfu_tgt.so.3.0 
00:02:26.366 SYMLINK libspdk_event_scheduler.so 00:02:26.366 SYMLINK libspdk_event_keyring.so 00:02:26.366 SYMLINK libspdk_event_iobuf.so 00:02:26.366 SYMLINK libspdk_event_sock.so 00:02:26.366 SYMLINK libspdk_event_vmd.so 00:02:26.366 SYMLINK libspdk_event_vhost_blk.so 00:02:26.366 SYMLINK libspdk_event_fsdev.so 00:02:26.366 SYMLINK libspdk_event_vfu_tgt.so 00:02:26.627 CC module/event/subsystems/accel/accel.o 00:02:26.888 LIB libspdk_event_accel.a 00:02:26.888 SO libspdk_event_accel.so.6.0 00:02:27.149 SYMLINK libspdk_event_accel.so 00:02:27.416 CC module/event/subsystems/bdev/bdev.o 00:02:27.679 LIB libspdk_event_bdev.a 00:02:27.679 SO libspdk_event_bdev.so.6.0 00:02:27.679 SYMLINK libspdk_event_bdev.so 00:02:27.940 CC module/event/subsystems/scsi/scsi.o 00:02:27.940 CC module/event/subsystems/nbd/nbd.o 00:02:27.940 CC module/event/subsystems/ublk/ublk.o 00:02:27.940 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.940 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.201 LIB libspdk_event_ublk.a 00:02:28.201 LIB libspdk_event_nbd.a 00:02:28.201 LIB libspdk_event_scsi.a 00:02:28.201 SO libspdk_event_ublk.so.3.0 00:02:28.201 SO libspdk_event_nbd.so.6.0 00:02:28.201 SO libspdk_event_scsi.so.6.0 00:02:28.201 LIB libspdk_event_nvmf.a 00:02:28.201 SYMLINK libspdk_event_ublk.so 00:02:28.201 SYMLINK libspdk_event_nbd.so 00:02:28.201 SO libspdk_event_nvmf.so.6.0 00:02:28.202 SYMLINK libspdk_event_scsi.so 00:02:28.463 SYMLINK libspdk_event_nvmf.so 00:02:28.724 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.725 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.725 LIB libspdk_event_vhost_scsi.a 00:02:28.986 LIB libspdk_event_iscsi.a 00:02:28.986 SO libspdk_event_vhost_scsi.so.3.0 00:02:28.986 SO libspdk_event_iscsi.so.6.0 00:02:28.986 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.986 SYMLINK libspdk_event_iscsi.so 00:02:29.247 SO libspdk.so.6.0 00:02:29.247 SYMLINK libspdk.so 00:02:29.509 CC app/trace_record/trace_record.o 00:02:29.509 CXX app/trace/trace.o 00:02:29.509 CC app/spdk_nvme_identify/identify.o 00:02:29.509 TEST_HEADER include/spdk/accel.h 00:02:29.509 CC app/spdk_lspci/spdk_lspci.o 00:02:29.509 TEST_HEADER include/spdk/accel_module.h 00:02:29.509 TEST_HEADER include/spdk/assert.h 00:02:29.509 TEST_HEADER include/spdk/barrier.h 00:02:29.509 TEST_HEADER include/spdk/base64.h 00:02:29.509 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.509 TEST_HEADER include/spdk/bdev.h 00:02:29.509 CC app/spdk_nvme_perf/perf.o 00:02:29.509 TEST_HEADER include/spdk/bdev_module.h 00:02:29.509 CC test/rpc_client/rpc_client_test.o 00:02:29.509 CC app/spdk_top/spdk_top.o 00:02:29.509 TEST_HEADER include/spdk/bit_array.h 00:02:29.509 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.509 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.509 TEST_HEADER include/spdk/bit_pool.h 00:02:29.509 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.509 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.509 TEST_HEADER include/spdk/blob.h 00:02:29.509 TEST_HEADER include/spdk/blobfs.h 00:02:29.509 TEST_HEADER include/spdk/config.h 00:02:29.509 TEST_HEADER include/spdk/conf.h 00:02:29.509 TEST_HEADER include/spdk/cpuset.h 00:02:29.509 TEST_HEADER include/spdk/crc32.h 00:02:29.509 TEST_HEADER include/spdk/crc16.h 00:02:29.510 TEST_HEADER include/spdk/crc64.h 00:02:29.510 TEST_HEADER include/spdk/dif.h 00:02:29.510 TEST_HEADER include/spdk/dma.h 00:02:29.510 TEST_HEADER include/spdk/endian.h 00:02:29.510 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.510 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:29.510 TEST_HEADER 
include/spdk/env.h 00:02:29.510 TEST_HEADER include/spdk/event.h 00:02:29.510 TEST_HEADER include/spdk/fd_group.h 00:02:29.510 TEST_HEADER include/spdk/fd.h 00:02:29.510 TEST_HEADER include/spdk/file.h 00:02:29.510 TEST_HEADER include/spdk/fsdev.h 00:02:29.510 TEST_HEADER include/spdk/fsdev_module.h 00:02:29.510 TEST_HEADER include/spdk/ftl.h 00:02:29.510 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:29.510 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.510 TEST_HEADER include/spdk/histogram_data.h 00:02:29.510 TEST_HEADER include/spdk/hexlify.h 00:02:29.510 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.510 TEST_HEADER include/spdk/idxd.h 00:02:29.510 TEST_HEADER include/spdk/init.h 00:02:29.510 TEST_HEADER include/spdk/ioat.h 00:02:29.510 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.510 CC app/spdk_dd/spdk_dd.o 00:02:29.510 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.510 TEST_HEADER include/spdk/json.h 00:02:29.510 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.510 TEST_HEADER include/spdk/keyring.h 00:02:29.510 TEST_HEADER include/spdk/keyring_module.h 00:02:29.510 TEST_HEADER include/spdk/likely.h 00:02:29.510 TEST_HEADER include/spdk/log.h 00:02:29.510 CC app/nvmf_tgt/nvmf_main.o 00:02:29.510 TEST_HEADER include/spdk/lvol.h 00:02:29.510 TEST_HEADER include/spdk/md5.h 00:02:29.510 TEST_HEADER include/spdk/memory.h 00:02:29.510 CC app/spdk_tgt/spdk_tgt.o 00:02:29.510 TEST_HEADER include/spdk/mmio.h 00:02:29.510 TEST_HEADER include/spdk/nbd.h 00:02:29.774 TEST_HEADER include/spdk/net.h 00:02:29.775 TEST_HEADER include/spdk/notify.h 00:02:29.775 TEST_HEADER include/spdk/nvme.h 00:02:29.775 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.775 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.775 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.775 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.775 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.775 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.775 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.775 TEST_HEADER include/spdk/nvmf.h 00:02:29.775 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.775 TEST_HEADER include/spdk/opal.h 00:02:29.775 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.775 TEST_HEADER include/spdk/opal_spec.h 00:02:29.775 TEST_HEADER include/spdk/pci_ids.h 00:02:29.775 TEST_HEADER include/spdk/pipe.h 00:02:29.775 TEST_HEADER include/spdk/reduce.h 00:02:29.775 TEST_HEADER include/spdk/queue.h 00:02:29.775 TEST_HEADER include/spdk/rpc.h 00:02:29.775 TEST_HEADER include/spdk/scheduler.h 00:02:29.775 TEST_HEADER include/spdk/scsi.h 00:02:29.775 TEST_HEADER include/spdk/sock.h 00:02:29.775 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.775 TEST_HEADER include/spdk/stdinc.h 00:02:29.775 TEST_HEADER include/spdk/thread.h 00:02:29.775 TEST_HEADER include/spdk/trace.h 00:02:29.775 TEST_HEADER include/spdk/string.h 00:02:29.775 TEST_HEADER include/spdk/ublk.h 00:02:29.775 TEST_HEADER include/spdk/tree.h 00:02:29.775 TEST_HEADER include/spdk/trace_parser.h 00:02:29.775 TEST_HEADER include/spdk/util.h 00:02:29.775 TEST_HEADER include/spdk/uuid.h 00:02:29.775 TEST_HEADER include/spdk/version.h 00:02:29.775 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.775 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.775 TEST_HEADER include/spdk/vmd.h 00:02:29.775 TEST_HEADER include/spdk/vhost.h 00:02:29.775 TEST_HEADER include/spdk/xor.h 00:02:29.775 TEST_HEADER include/spdk/zipf.h 00:02:29.775 CXX test/cpp_headers/accel.o 00:02:29.775 CXX test/cpp_headers/accel_module.o 00:02:29.775 CXX test/cpp_headers/assert.o 00:02:29.775 CXX 
test/cpp_headers/barrier.o 00:02:29.775 CXX test/cpp_headers/base64.o 00:02:29.775 CXX test/cpp_headers/bdev_zone.o 00:02:29.775 CXX test/cpp_headers/bdev.o 00:02:29.775 CXX test/cpp_headers/bdev_module.o 00:02:29.775 CXX test/cpp_headers/bit_array.o 00:02:29.775 CXX test/cpp_headers/bit_pool.o 00:02:29.775 CXX test/cpp_headers/blobfs.o 00:02:29.775 CXX test/cpp_headers/blob.o 00:02:29.775 CXX test/cpp_headers/blob_bdev.o 00:02:29.775 CXX test/cpp_headers/blobfs_bdev.o 00:02:29.775 CXX test/cpp_headers/conf.o 00:02:29.775 CXX test/cpp_headers/config.o 00:02:29.775 CXX test/cpp_headers/cpuset.o 00:02:29.775 CXX test/cpp_headers/crc16.o 00:02:29.775 CXX test/cpp_headers/crc64.o 00:02:29.775 CXX test/cpp_headers/crc32.o 00:02:29.775 CXX test/cpp_headers/dma.o 00:02:29.775 CXX test/cpp_headers/dif.o 00:02:29.775 CXX test/cpp_headers/endian.o 00:02:29.775 CXX test/cpp_headers/env.o 00:02:29.775 CXX test/cpp_headers/env_dpdk.o 00:02:29.775 CXX test/cpp_headers/event.o 00:02:29.775 CXX test/cpp_headers/fd.o 00:02:29.775 CXX test/cpp_headers/fd_group.o 00:02:29.775 CXX test/cpp_headers/file.o 00:02:29.775 CXX test/cpp_headers/fsdev_module.o 00:02:29.775 CXX test/cpp_headers/fsdev.o 00:02:29.775 CXX test/cpp_headers/ftl.o 00:02:29.775 CXX test/cpp_headers/gpt_spec.o 00:02:29.775 CXX test/cpp_headers/fuse_dispatcher.o 00:02:29.775 CXX test/cpp_headers/hexlify.o 00:02:29.775 CXX test/cpp_headers/histogram_data.o 00:02:29.775 CXX test/cpp_headers/idxd_spec.o 00:02:29.775 CXX test/cpp_headers/init.o 00:02:29.775 CXX test/cpp_headers/ioat.o 00:02:29.776 CXX test/cpp_headers/idxd.o 00:02:29.776 CXX test/cpp_headers/iscsi_spec.o 00:02:29.776 CXX test/cpp_headers/ioat_spec.o 00:02:29.776 CXX test/cpp_headers/jsonrpc.o 00:02:29.776 CXX test/cpp_headers/json.o 00:02:29.776 CXX test/cpp_headers/keyring_module.o 00:02:29.776 CXX test/cpp_headers/keyring.o 00:02:29.776 CXX test/cpp_headers/likely.o 00:02:29.776 CXX test/cpp_headers/log.o 00:02:29.776 CXX test/cpp_headers/md5.o 00:02:29.776 CXX test/cpp_headers/nbd.o 00:02:29.776 CXX test/cpp_headers/lvol.o 00:02:29.776 CXX test/cpp_headers/mmio.o 00:02:29.776 CXX test/cpp_headers/notify.o 00:02:29.776 CXX test/cpp_headers/memory.o 00:02:29.776 CXX test/cpp_headers/net.o 00:02:29.776 CXX test/cpp_headers/nvme.o 00:02:29.776 CXX test/cpp_headers/nvme_intel.o 00:02:29.776 CXX test/cpp_headers/nvme_ocssd.o 00:02:29.776 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:29.776 CXX test/cpp_headers/nvme_zns.o 00:02:29.776 CXX test/cpp_headers/nvmf_cmd.o 00:02:29.776 CXX test/cpp_headers/nvme_spec.o 00:02:29.776 CXX test/cpp_headers/nvmf.o 00:02:29.776 CXX test/cpp_headers/nvmf_spec.o 00:02:29.776 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:29.776 CXX test/cpp_headers/nvmf_transport.o 00:02:29.776 CXX test/cpp_headers/opal.o 00:02:29.776 CXX test/cpp_headers/pipe.o 00:02:29.776 CXX test/cpp_headers/pci_ids.o 00:02:29.776 CXX test/cpp_headers/opal_spec.o 00:02:29.776 CC examples/util/zipf/zipf.o 00:02:29.776 CC test/thread/poller_perf/poller_perf.o 00:02:29.776 CXX test/cpp_headers/queue.o 00:02:29.776 CC test/app/stub/stub.o 00:02:29.776 CC examples/ioat/perf/perf.o 00:02:29.776 CC examples/ioat/verify/verify.o 00:02:29.776 CC test/app/jsoncat/jsoncat.o 00:02:29.776 CXX test/cpp_headers/reduce.o 00:02:29.776 CXX test/cpp_headers/sock.o 00:02:29.776 CXX test/cpp_headers/rpc.o 00:02:29.776 CXX test/cpp_headers/scsi.o 00:02:29.776 CXX test/cpp_headers/scsi_spec.o 00:02:29.776 CXX test/cpp_headers/stdinc.o 00:02:29.776 CXX test/cpp_headers/scheduler.o 00:02:29.776 CXX 
test/cpp_headers/string.o 00:02:29.776 CC test/env/vtophys/vtophys.o 00:02:29.776 CXX test/cpp_headers/trace.o 00:02:29.776 CXX test/cpp_headers/thread.o 00:02:29.776 CC app/fio/nvme/fio_plugin.o 00:02:29.776 CXX test/cpp_headers/ublk.o 00:02:29.776 CXX test/cpp_headers/trace_parser.o 00:02:29.776 CC test/app/histogram_perf/histogram_perf.o 00:02:29.776 LINK spdk_lspci 00:02:29.776 CXX test/cpp_headers/util.o 00:02:29.776 CXX test/cpp_headers/tree.o 00:02:29.776 CXX test/cpp_headers/uuid.o 00:02:29.776 CC test/env/pci/pci_ut.o 00:02:29.776 CC test/env/memory/memory_ut.o 00:02:29.776 CXX test/cpp_headers/version.o 00:02:29.776 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.776 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.776 CXX test/cpp_headers/xor.o 00:02:29.776 CXX test/cpp_headers/vmd.o 00:02:29.776 CXX test/cpp_headers/vhost.o 00:02:29.776 CXX test/cpp_headers/zipf.o 00:02:29.776 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.041 LINK iscsi_tgt 00:02:30.041 CC test/app/bdev_svc/bdev_svc.o 00:02:30.041 CC test/dma/test_dma/test_dma.o 00:02:30.041 CC app/fio/bdev/fio_plugin.o 00:02:30.041 LINK interrupt_tgt 00:02:30.041 LINK nvmf_tgt 00:02:30.041 LINK spdk_nvme_discover 00:02:30.041 LINK rpc_client_test 00:02:30.041 LINK spdk_tgt 00:02:30.302 LINK spdk_trace_record 00:02:30.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:30.302 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:30.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:30.302 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:30.302 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.302 LINK spdk_trace 00:02:30.563 LINK zipf 00:02:30.563 LINK jsoncat 00:02:30.563 LINK spdk_dd 00:02:30.563 LINK poller_perf 00:02:30.563 LINK histogram_perf 00:02:30.563 LINK vtophys 00:02:30.563 LINK env_dpdk_post_init 00:02:30.563 LINK verify 00:02:30.563 LINK stub 00:02:30.563 LINK ioat_perf 00:02:30.823 LINK bdev_svc 00:02:30.823 CC app/vhost/vhost.o 00:02:30.823 LINK test_dma 00:02:30.823 LINK pci_ut 00:02:30.823 CC examples/sock/hello_world/hello_sock.o 00:02:30.823 LINK spdk_nvme 00:02:30.823 CC examples/idxd/perf/perf.o 00:02:30.823 CC examples/vmd/led/led.o 00:02:30.823 CC examples/vmd/lsvmd/lsvmd.o 00:02:30.823 LINK spdk_bdev 00:02:31.085 LINK vhost_fuzz 00:02:31.085 LINK nvme_fuzz 00:02:31.085 CC examples/thread/thread/thread_ex.o 00:02:31.085 LINK spdk_nvme_perf 00:02:31.085 LINK vhost 00:02:31.085 CC test/event/reactor/reactor.o 00:02:31.085 LINK spdk_nvme_identify 00:02:31.085 CC test/event/reactor_perf/reactor_perf.o 00:02:31.085 CC test/event/event_perf/event_perf.o 00:02:31.085 CC test/event/app_repeat/app_repeat.o 00:02:31.085 CC test/event/scheduler/scheduler.o 00:02:31.085 LINK mem_callbacks 00:02:31.085 LINK lsvmd 00:02:31.085 LINK led 00:02:31.085 LINK spdk_top 00:02:31.085 LINK hello_sock 00:02:31.392 LINK reactor 00:02:31.392 LINK idxd_perf 00:02:31.392 LINK event_perf 00:02:31.392 LINK reactor_perf 00:02:31.392 LINK app_repeat 00:02:31.392 LINK thread 00:02:31.392 CC test/nvme/aer/aer.o 00:02:31.392 CC test/nvme/reserve/reserve.o 00:02:31.392 LINK scheduler 00:02:31.392 CC test/nvme/fused_ordering/fused_ordering.o 00:02:31.392 CC test/nvme/overhead/overhead.o 00:02:31.392 CC test/nvme/compliance/nvme_compliance.o 00:02:31.392 CC test/nvme/boot_partition/boot_partition.o 00:02:31.392 CC test/nvme/e2edp/nvme_dp.o 00:02:31.392 CC test/nvme/startup/startup.o 00:02:31.392 CC test/nvme/connect_stress/connect_stress.o 00:02:31.392 CC test/nvme/sgl/sgl.o 00:02:31.392 CC test/nvme/err_injection/err_injection.o 
00:02:31.392 CC test/nvme/simple_copy/simple_copy.o 00:02:31.392 CC test/nvme/reset/reset.o 00:02:31.392 CC test/nvme/fdp/fdp.o 00:02:31.392 CC test/nvme/cuse/cuse.o 00:02:31.392 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:31.392 CC test/blobfs/mkfs/mkfs.o 00:02:31.392 CC test/accel/dif/dif.o 00:02:31.654 LINK memory_ut 00:02:31.654 CC test/lvol/esnap/esnap.o 00:02:31.654 LINK startup 00:02:31.654 LINK reserve 00:02:31.654 LINK boot_partition 00:02:31.654 LINK connect_stress 00:02:31.654 LINK aer 00:02:31.654 LINK err_injection 00:02:31.654 LINK fused_ordering 00:02:31.654 LINK doorbell_aers 00:02:31.654 LINK mkfs 00:02:31.654 LINK simple_copy 00:02:31.654 CC examples/nvme/hello_world/hello_world.o 00:02:31.654 CC examples/nvme/abort/abort.o 00:02:31.654 CC examples/nvme/hotplug/hotplug.o 00:02:31.654 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:31.654 LINK overhead 00:02:31.654 CC examples/nvme/arbitration/arbitration.o 00:02:31.654 CC examples/nvme/reconnect/reconnect.o 00:02:31.654 LINK sgl 00:02:31.654 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:31.654 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:31.654 LINK reset 00:02:31.654 LINK nvme_dp 00:02:31.654 LINK nvme_compliance 00:02:31.654 LINK fdp 00:02:31.916 CC examples/accel/perf/accel_perf.o 00:02:31.916 CC examples/blob/cli/blobcli.o 00:02:31.916 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:31.916 LINK pmr_persistence 00:02:31.916 CC examples/blob/hello_world/hello_blob.o 00:02:31.916 LINK cmb_copy 00:02:31.916 LINK hello_world 00:02:31.916 LINK iscsi_fuzz 00:02:31.916 LINK hotplug 00:02:31.916 LINK reconnect 00:02:31.916 LINK dif 00:02:31.916 LINK arbitration 00:02:32.179 LINK abort 00:02:32.179 LINK nvme_manage 00:02:32.179 LINK hello_fsdev 00:02:32.179 LINK hello_blob 00:02:32.441 LINK accel_perf 00:02:32.441 LINK blobcli 00:02:32.703 LINK cuse 00:02:32.703 CC test/bdev/bdevio/bdevio.o 00:02:32.965 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.965 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.965 LINK bdevio 00:02:33.227 LINK hello_bdev 00:02:33.802 LINK bdevperf 00:02:34.376 CC examples/nvmf/nvmf/nvmf.o 00:02:34.637 LINK nvmf 00:02:36.025 LINK esnap 00:02:36.598 00:02:36.598 real 0m55.342s 00:02:36.598 user 7m52.856s 00:02:36.598 sys 4m51.770s 00:02:36.598 11:37:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.598 11:37:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:36.598 ************************************ 00:02:36.598 END TEST make 00:02:36.598 ************************************ 00:02:36.598 11:37:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:36.599 11:37:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:36.599 11:37:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:36.599 11:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.599 11:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:36.599 11:37:44 -- pm/common@44 -- $ pid=3912446 00:02:36.599 11:37:44 -- pm/common@50 -- $ kill -TERM 3912446 00:02:36.599 11:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.599 11:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:36.599 11:37:44 -- pm/common@44 -- $ pid=3912447 00:02:36.599 11:37:44 -- pm/common@50 -- $ kill -TERM 3912447 00:02:36.599 11:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.599 
11:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:36.599 11:37:44 -- pm/common@44 -- $ pid=3912449 00:02:36.599 11:37:44 -- pm/common@50 -- $ kill -TERM 3912449 00:02:36.599 11:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.599 11:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:36.599 11:37:44 -- pm/common@44 -- $ pid=3912473 00:02:36.599 11:37:44 -- pm/common@50 -- $ sudo -E kill -TERM 3912473 00:02:36.599 11:37:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:36.599 11:37:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:36.599 11:37:44 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:36.599 11:37:44 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:36.599 11:37:44 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:36.599 11:37:44 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:36.599 11:37:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:36.599 11:37:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:36.599 11:37:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:36.599 11:37:44 -- scripts/common.sh@336 -- # IFS=.-: 00:02:36.599 11:37:44 -- scripts/common.sh@336 -- # read -ra ver1 00:02:36.599 11:37:44 -- scripts/common.sh@337 -- # IFS=.-: 00:02:36.599 11:37:44 -- scripts/common.sh@337 -- # read -ra ver2 00:02:36.599 11:37:44 -- scripts/common.sh@338 -- # local 'op=<' 00:02:36.599 11:37:44 -- scripts/common.sh@340 -- # ver1_l=2 00:02:36.599 11:37:44 -- scripts/common.sh@341 -- # ver2_l=1 00:02:36.599 11:37:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:36.599 11:37:44 -- scripts/common.sh@344 -- # case "$op" in 00:02:36.599 11:37:44 -- scripts/common.sh@345 -- # : 1 00:02:36.599 11:37:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:36.599 11:37:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:36.599 11:37:44 -- scripts/common.sh@365 -- # decimal 1 00:02:36.599 11:37:44 -- scripts/common.sh@353 -- # local d=1 00:02:36.599 11:37:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:36.599 11:37:44 -- scripts/common.sh@355 -- # echo 1 00:02:36.599 11:37:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:36.599 11:37:44 -- scripts/common.sh@366 -- # decimal 2 00:02:36.599 11:37:44 -- scripts/common.sh@353 -- # local d=2 00:02:36.599 11:37:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:36.599 11:37:44 -- scripts/common.sh@355 -- # echo 2 00:02:36.599 11:37:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:36.599 11:37:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:36.599 11:37:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:36.599 11:37:44 -- scripts/common.sh@368 -- # return 0 00:02:36.599 11:37:44 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:36.599 11:37:44 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.599 --rc genhtml_branch_coverage=1 00:02:36.599 --rc genhtml_function_coverage=1 00:02:36.599 --rc genhtml_legend=1 00:02:36.599 --rc geninfo_all_blocks=1 00:02:36.599 --rc geninfo_unexecuted_blocks=1 00:02:36.599 00:02:36.599 ' 00:02:36.599 11:37:44 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.599 --rc genhtml_branch_coverage=1 00:02:36.599 --rc genhtml_function_coverage=1 00:02:36.599 --rc genhtml_legend=1 00:02:36.599 --rc geninfo_all_blocks=1 00:02:36.599 --rc geninfo_unexecuted_blocks=1 00:02:36.599 00:02:36.599 ' 00:02:36.599 11:37:44 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.599 --rc genhtml_branch_coverage=1 00:02:36.599 --rc genhtml_function_coverage=1 00:02:36.599 --rc genhtml_legend=1 00:02:36.599 --rc geninfo_all_blocks=1 00:02:36.599 --rc geninfo_unexecuted_blocks=1 00:02:36.599 00:02:36.599 ' 00:02:36.599 11:37:44 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:36.599 --rc genhtml_branch_coverage=1 00:02:36.599 --rc genhtml_function_coverage=1 00:02:36.599 --rc genhtml_legend=1 00:02:36.599 --rc geninfo_all_blocks=1 00:02:36.599 --rc geninfo_unexecuted_blocks=1 00:02:36.599 00:02:36.599 ' 00:02:36.599 11:37:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:36.599 11:37:44 -- nvmf/common.sh@7 -- # uname -s 00:02:36.599 11:37:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:36.599 11:37:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:36.599 11:37:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:36.599 11:37:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:36.861 11:37:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:36.861 11:37:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:36.861 11:37:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:36.861 11:37:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:36.861 11:37:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:36.861 11:37:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:36.861 11:37:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:36.861 11:37:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:36.861 11:37:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:36.861 11:37:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:36.861 11:37:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:36.861 11:37:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:36.861 11:37:44 -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:02:36.861 11:37:44 -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:36.861 11:37:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:36.861 11:37:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:36.861 11:37:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.861 11:37:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.861 11:37:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.861 11:37:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.861 11:37:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.861 11:37:44 -- paths/export.sh@5 -- # export PATH 00:02:36.862 11:37:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.862 11:37:44 -- nvmf/common.sh@52 -- # : 0 00:02:36.862 11:37:44 -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:02:36.862 11:37:44 -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:02:36.862 11:37:44 -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:02:36.862 11:37:44 -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:36.862 11:37:44 -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:36.862 11:37:44 -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:02:36.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:02:36.862 11:37:44 -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:02:36.862 11:37:44 -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:02:36.862 11:37:44 -- nvmf/common.sh@56 -- # have_pci_nics=0 00:02:36.862 11:37:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:36.862 11:37:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:36.862 11:37:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:36.862 11:37:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:36.862 11:37:44 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.862 11:37:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:36.862 11:37:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.862 11:37:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:36.862 11:37:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:36.862 11:37:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:36.862 11:37:44 -- spdk/autotest.sh@48 -- # udevadm_pid=3978016 00:02:36.862 11:37:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:36.862 11:37:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:36.862 11:37:44 -- pm/common@17 -- # local monitor 00:02:36.862 11:37:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.862 11:37:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.862 11:37:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.862 11:37:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.862 11:37:44 -- pm/common@21 -- # date +%s 00:02:36.862 11:37:44 -- pm/common@21 -- # date +%s 00:02:36.862 11:37:44 -- pm/common@25 -- # sleep 1 00:02:36.862 11:37:44 -- pm/common@21 -- # date +%s 00:02:36.862 11:37:44 -- pm/common@21 -- # date +%s 00:02:36.862 11:37:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740664 00:02:36.862 11:37:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740664 00:02:36.862 11:37:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740664 00:02:36.862 11:37:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733740664 00:02:36.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740664_collect-cpu-load.pm.log 00:02:36.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740664_collect-vmstat.pm.log 00:02:36.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740664_collect-cpu-temp.pm.log 00:02:36.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733740664_collect-bmc-pm.bmc.pm.log 00:02:37.804 11:37:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:37.804 11:37:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:37.804 11:37:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:37.804 11:37:45 -- common/autotest_common.sh@10 -- # set +x 00:02:37.804 11:37:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:37.804 11:37:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:37.804 11:37:45 -- common/autotest_common.sh@10 -- # set +x 00:02:37.804 11:37:45 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:37.805 11:37:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.805 11:37:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.805 11:37:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:37.805 11:37:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:37.805 11:37:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:37.805 11:37:45 -- common/autotest_common.sh@1457 -- # uname 00:02:37.805 11:37:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:37.805 11:37:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:37.805 11:37:45 -- common/autotest_common.sh@1477 -- # uname 00:02:37.805 11:37:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:37.805 11:37:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:37.805 11:37:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:38.066 lcov: LCOV version 1.15 00:02:38.066 11:37:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:52.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:52.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.093 11:38:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:11.093 11:38:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.093 11:38:16 -- common/autotest_common.sh@10 -- # set +x 00:03:11.093 11:38:16 -- spdk/autotest.sh@78 -- # rm -f 00:03:11.093 11:38:16 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.034 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:12.034 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:12.294 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:12.294 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:12.294 
0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:12.554 11:38:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:12.554 11:38:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:12.554 11:38:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:12.554 11:38:20 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:12.554 11:38:20 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:12.554 11:38:20 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:12.554 11:38:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:12.554 11:38:20 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:12.554 11:38:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:12.554 11:38:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:12.554 11:38:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:12.554 11:38:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:12.554 11:38:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:12.554 11:38:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:12.554 11:38:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:12.554 11:38:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:12.554 11:38:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:12.554 11:38:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:12.554 11:38:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:12.814 No valid GPT data, bailing 00:03:12.814 11:38:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:12.814 11:38:20 -- scripts/common.sh@394 -- # pt= 00:03:12.814 11:38:20 -- scripts/common.sh@395 -- # return 1 00:03:12.814 11:38:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:12.814 1+0 records in 00:03:12.814 1+0 records out 00:03:12.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484538 s, 216 MB/s 00:03:12.814 11:38:20 -- spdk/autotest.sh@105 -- # sync 00:03:12.814 11:38:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:12.814 11:38:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:12.814 11:38:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.956 11:38:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:20.956 11:38:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:20.956 11:38:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:20.956 11:38:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.256 Hugepages 00:03:24.256 node hugesize free / total 00:03:24.256 node0 1048576kB 0 / 0 00:03:24.518 node0 2048kB 0 / 0 00:03:24.518 node1 1048576kB 0 / 0 00:03:24.518 node1 2048kB 0 / 0 00:03:24.518 00:03:24.518 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.518 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:24.518 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:24.518 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:24.518 
I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:24.518 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:24.518 11:38:32 -- spdk/autotest.sh@117 -- # uname -s 00:03:24.518 11:38:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:24.518 11:38:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:24.518 11:38:32 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.824 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:28.085 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.002 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.263 11:38:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:31.206 11:38:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:31.206 11:38:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:31.206 11:38:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:31.206 11:38:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:31.206 11:38:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:31.206 11:38:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:31.206 11:38:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:31.206 11:38:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:31.206 11:38:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:31.468 11:38:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:31.468 11:38:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:31.468 11:38:39 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.774 Waiting for block devices as requested 00:03:34.774 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:34.774 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:35.036 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:35.036 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:35.036 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:35.298 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:35.298 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:35.298 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:35.560 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:35.560 0000:00:01.6 (8086 0b00): 
vfio-pci -> ioatdma 00:03:35.821 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:35.821 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:35.821 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:36.083 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:36.083 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:36.083 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:36.083 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:36.656 11:38:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:36.656 11:38:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:36.656 11:38:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:36.656 11:38:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:36.656 11:38:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:36.656 11:38:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:36.656 11:38:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:36.656 11:38:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:36.656 11:38:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:36.656 11:38:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:36.656 11:38:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:36.656 11:38:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:36.656 11:38:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:36.656 11:38:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:36.656 11:38:44 -- common/autotest_common.sh@1543 -- # continue 00:03:36.656 11:38:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:36.656 11:38:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:36.656 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:03:36.656 11:38:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:36.656 11:38:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.656 11:38:44 -- common/autotest_common.sh@10 -- # set +x 00:03:36.656 11:38:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.962 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:39.962 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:39.962 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 
00:03:40.224 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.224 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.798 11:38:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:40.798 11:38:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.798 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:03:40.798 11:38:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:40.798 11:38:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:40.798 11:38:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:40.798 11:38:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:40.798 11:38:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:40.798 11:38:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:40.798 11:38:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:40.798 11:38:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:40.798 11:38:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:40.798 11:38:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:40.798 11:38:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.798 11:38:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:40.798 11:38:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:40.798 11:38:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:40.798 11:38:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:40.798 11:38:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.798 11:38:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:40.798 11:38:48 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:40.798 11:38:48 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:40.798 11:38:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:40.798 11:38:48 -- common/autotest_common.sh@1572 -- # return 0 00:03:40.798 11:38:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:40.798 11:38:48 -- common/autotest_common.sh@1580 -- # return 0 00:03:40.798 11:38:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.798 11:38:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.798 11:38:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.798 11:38:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.798 11:38:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.798 11:38:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.798 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:03:40.798 11:38:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.798 11:38:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.798 11:38:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.798 11:38:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.798 11:38:48 -- common/autotest_common.sh@10 -- # set +x 00:03:40.798 ************************************ 00:03:40.798 START TEST env 00:03:40.798 ************************************ 00:03:40.798 11:38:48 env -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:41.059 * Looking for test storage... 00:03:41.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:41.059 11:38:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.059 11:38:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.059 11:38:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.059 11:38:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.059 11:38:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.059 11:38:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.059 11:38:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.059 11:38:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.059 11:38:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.059 11:38:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.059 11:38:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.059 11:38:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:41.059 11:38:48 env -- scripts/common.sh@345 -- # : 1 00:03:41.059 11:38:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.059 11:38:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:41.059 11:38:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:41.059 11:38:48 env -- scripts/common.sh@353 -- # local d=1 00:03:41.059 11:38:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.059 11:38:48 env -- scripts/common.sh@355 -- # echo 1 00:03:41.059 11:38:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.059 11:38:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:41.059 11:38:48 env -- scripts/common.sh@353 -- # local d=2 00:03:41.059 11:38:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.059 11:38:48 env -- scripts/common.sh@355 -- # echo 2 00:03:41.059 11:38:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.059 11:38:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.059 11:38:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.059 11:38:48 env -- scripts/common.sh@368 -- # return 0 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.059 --rc genhtml_branch_coverage=1 00:03:41.059 --rc genhtml_function_coverage=1 00:03:41.059 --rc genhtml_legend=1 00:03:41.059 --rc geninfo_all_blocks=1 00:03:41.059 --rc geninfo_unexecuted_blocks=1 00:03:41.059 00:03:41.059 ' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.059 --rc genhtml_branch_coverage=1 00:03:41.059 --rc genhtml_function_coverage=1 00:03:41.059 --rc genhtml_legend=1 00:03:41.059 --rc geninfo_all_blocks=1 00:03:41.059 --rc geninfo_unexecuted_blocks=1 00:03:41.059 00:03:41.059 ' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:41.059 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.059 --rc genhtml_branch_coverage=1 00:03:41.059 --rc genhtml_function_coverage=1 00:03:41.059 --rc genhtml_legend=1 00:03:41.059 --rc geninfo_all_blocks=1 00:03:41.059 --rc geninfo_unexecuted_blocks=1 00:03:41.059 00:03:41.059 ' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.059 --rc genhtml_branch_coverage=1 00:03:41.059 --rc genhtml_function_coverage=1 00:03:41.059 --rc genhtml_legend=1 00:03:41.059 --rc geninfo_all_blocks=1 00:03:41.059 --rc geninfo_unexecuted_blocks=1 00:03:41.059 00:03:41.059 ' 00:03:41.059 11:38:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.059 11:38:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.059 11:38:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.059 ************************************ 00:03:41.059 START TEST env_memory 00:03:41.059 ************************************ 00:03:41.059 11:38:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:41.059 00:03:41.059 00:03:41.059 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.059 http://cunit.sourceforge.net/ 00:03:41.059 00:03:41.059 00:03:41.059 Suite: memory 00:03:41.059 Test: alloc and free memory map ...[2024-12-09 11:38:48.916551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:41.059 passed 00:03:41.059 Test: mem map translation ...[2024-12-09 11:38:48.942297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:41.059 [2024-12-09 11:38:48.942325] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:41.059 [2024-12-09 11:38:48.942373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:41.059 [2024-12-09 11:38:48.942385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:41.322 passed 00:03:41.322 Test: mem map registration ...[2024-12-09 11:38:48.997754] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:41.322 [2024-12-09 11:38:48.997779] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:41.322 passed 00:03:41.322 Test: mem map adjacent registrations ...passed 00:03:41.322 00:03:41.322 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.322 suites 1 1 n/a 0 0 00:03:41.322 tests 4 4 4 0 0 00:03:41.322 asserts 152 152 152 0 n/a 00:03:41.322 00:03:41.322 Elapsed time = 0.195 seconds 00:03:41.322 00:03:41.322 real 0m0.210s 00:03:41.322 user 0m0.195s 00:03:41.322 sys 0m0.014s 00:03:41.322 11:38:49 env.env_memory 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.322 11:38:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:41.322 ************************************ 00:03:41.322 END TEST env_memory 00:03:41.322 ************************************ 00:03:41.322 11:38:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.322 11:38:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.322 11:38:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.322 11:38:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.322 ************************************ 00:03:41.322 START TEST env_vtophys 00:03:41.322 ************************************ 00:03:41.322 11:38:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:41.322 EAL: lib.eal log level changed from notice to debug 00:03:41.322 EAL: Detected lcore 0 as core 0 on socket 0 00:03:41.322 EAL: Detected lcore 1 as core 1 on socket 0 00:03:41.322 EAL: Detected lcore 2 as core 2 on socket 0 00:03:41.322 EAL: Detected lcore 3 as core 3 on socket 0 00:03:41.322 EAL: Detected lcore 4 as core 4 on socket 0 00:03:41.322 EAL: Detected lcore 5 as core 5 on socket 0 00:03:41.322 EAL: Detected lcore 6 as core 6 on socket 0 00:03:41.322 EAL: Detected lcore 7 as core 7 on socket 0 00:03:41.322 EAL: Detected lcore 8 as core 8 on socket 0 00:03:41.322 EAL: Detected lcore 9 as core 9 on socket 0 00:03:41.322 EAL: Detected lcore 10 as core 10 on socket 0 00:03:41.322 EAL: Detected lcore 11 as core 11 on socket 0 00:03:41.322 EAL: Detected lcore 12 as core 12 on socket 0 00:03:41.322 EAL: Detected lcore 13 as core 13 on socket 0 00:03:41.322 EAL: Detected lcore 14 as core 14 on socket 0 00:03:41.322 EAL: Detected lcore 15 as core 15 on socket 0 00:03:41.322 EAL: Detected lcore 16 as core 16 on socket 0 00:03:41.322 EAL: Detected lcore 17 as core 17 on socket 0 00:03:41.322 EAL: Detected lcore 18 as core 18 on socket 0 00:03:41.322 EAL: Detected lcore 19 as core 19 on socket 0 00:03:41.322 EAL: Detected lcore 20 as core 20 on socket 0 00:03:41.322 EAL: Detected lcore 21 as core 21 on socket 0 00:03:41.322 EAL: Detected lcore 22 as core 22 on socket 0 00:03:41.322 EAL: Detected lcore 23 as core 23 on socket 0 00:03:41.322 EAL: Detected lcore 24 as core 24 on socket 0 00:03:41.322 EAL: Detected lcore 25 as core 25 on socket 0 00:03:41.322 EAL: Detected lcore 26 as core 26 on socket 0 00:03:41.322 EAL: Detected lcore 27 as core 27 on socket 0 00:03:41.322 EAL: Detected lcore 28 as core 28 on socket 0 00:03:41.322 EAL: Detected lcore 29 as core 29 on socket 0 00:03:41.322 EAL: Detected lcore 30 as core 30 on socket 0 00:03:41.322 EAL: Detected lcore 31 as core 31 on socket 0 00:03:41.322 EAL: Detected lcore 32 as core 32 on socket 0 00:03:41.322 EAL: Detected lcore 33 as core 33 on socket 0 00:03:41.322 EAL: Detected lcore 34 as core 34 on socket 0 00:03:41.322 EAL: Detected lcore 35 as core 35 on socket 0 00:03:41.322 EAL: Detected lcore 36 as core 0 on socket 1 00:03:41.322 EAL: Detected lcore 37 as core 1 on socket 1 00:03:41.322 EAL: Detected lcore 38 as core 2 on socket 1 00:03:41.322 EAL: Detected lcore 39 as core 3 on socket 1 00:03:41.322 EAL: Detected lcore 40 as core 4 on socket 1 00:03:41.322 EAL: Detected lcore 41 as core 5 on socket 1 00:03:41.322 EAL: Detected lcore 42 as core 6 on socket 1 00:03:41.322 EAL: Detected lcore 43 as core 7 on 
socket 1 00:03:41.322 EAL: Detected lcore 44 as core 8 on socket 1 00:03:41.322 EAL: Detected lcore 45 as core 9 on socket 1 00:03:41.322 EAL: Detected lcore 46 as core 10 on socket 1 00:03:41.322 EAL: Detected lcore 47 as core 11 on socket 1 00:03:41.322 EAL: Detected lcore 48 as core 12 on socket 1 00:03:41.322 EAL: Detected lcore 49 as core 13 on socket 1 00:03:41.322 EAL: Detected lcore 50 as core 14 on socket 1 00:03:41.322 EAL: Detected lcore 51 as core 15 on socket 1 00:03:41.322 EAL: Detected lcore 52 as core 16 on socket 1 00:03:41.322 EAL: Detected lcore 53 as core 17 on socket 1 00:03:41.322 EAL: Detected lcore 54 as core 18 on socket 1 00:03:41.322 EAL: Detected lcore 55 as core 19 on socket 1 00:03:41.322 EAL: Detected lcore 56 as core 20 on socket 1 00:03:41.322 EAL: Detected lcore 57 as core 21 on socket 1 00:03:41.322 EAL: Detected lcore 58 as core 22 on socket 1 00:03:41.322 EAL: Detected lcore 59 as core 23 on socket 1 00:03:41.322 EAL: Detected lcore 60 as core 24 on socket 1 00:03:41.322 EAL: Detected lcore 61 as core 25 on socket 1 00:03:41.322 EAL: Detected lcore 62 as core 26 on socket 1 00:03:41.322 EAL: Detected lcore 63 as core 27 on socket 1 00:03:41.322 EAL: Detected lcore 64 as core 28 on socket 1 00:03:41.322 EAL: Detected lcore 65 as core 29 on socket 1 00:03:41.322 EAL: Detected lcore 66 as core 30 on socket 1 00:03:41.322 EAL: Detected lcore 67 as core 31 on socket 1 00:03:41.322 EAL: Detected lcore 68 as core 32 on socket 1 00:03:41.322 EAL: Detected lcore 69 as core 33 on socket 1 00:03:41.322 EAL: Detected lcore 70 as core 34 on socket 1 00:03:41.323 EAL: Detected lcore 71 as core 35 on socket 1 00:03:41.323 EAL: Detected lcore 72 as core 0 on socket 0 00:03:41.323 EAL: Detected lcore 73 as core 1 on socket 0 00:03:41.323 EAL: Detected lcore 74 as core 2 on socket 0 00:03:41.323 EAL: Detected lcore 75 as core 3 on socket 0 00:03:41.323 EAL: Detected lcore 76 as core 4 on socket 0 00:03:41.323 EAL: Detected lcore 77 as core 5 on socket 0 00:03:41.323 EAL: Detected lcore 78 as core 6 on socket 0 00:03:41.323 EAL: Detected lcore 79 as core 7 on socket 0 00:03:41.323 EAL: Detected lcore 80 as core 8 on socket 0 00:03:41.323 EAL: Detected lcore 81 as core 9 on socket 0 00:03:41.323 EAL: Detected lcore 82 as core 10 on socket 0 00:03:41.323 EAL: Detected lcore 83 as core 11 on socket 0 00:03:41.323 EAL: Detected lcore 84 as core 12 on socket 0 00:03:41.323 EAL: Detected lcore 85 as core 13 on socket 0 00:03:41.323 EAL: Detected lcore 86 as core 14 on socket 0 00:03:41.323 EAL: Detected lcore 87 as core 15 on socket 0 00:03:41.323 EAL: Detected lcore 88 as core 16 on socket 0 00:03:41.323 EAL: Detected lcore 89 as core 17 on socket 0 00:03:41.323 EAL: Detected lcore 90 as core 18 on socket 0 00:03:41.323 EAL: Detected lcore 91 as core 19 on socket 0 00:03:41.323 EAL: Detected lcore 92 as core 20 on socket 0 00:03:41.323 EAL: Detected lcore 93 as core 21 on socket 0 00:03:41.323 EAL: Detected lcore 94 as core 22 on socket 0 00:03:41.323 EAL: Detected lcore 95 as core 23 on socket 0 00:03:41.323 EAL: Detected lcore 96 as core 24 on socket 0 00:03:41.323 EAL: Detected lcore 97 as core 25 on socket 0 00:03:41.323 EAL: Detected lcore 98 as core 26 on socket 0 00:03:41.323 EAL: Detected lcore 99 as core 27 on socket 0 00:03:41.323 EAL: Detected lcore 100 as core 28 on socket 0 00:03:41.323 EAL: Detected lcore 101 as core 29 on socket 0 00:03:41.323 EAL: Detected lcore 102 as core 30 on socket 0 00:03:41.323 EAL: Detected lcore 103 as core 31 on socket 0 00:03:41.323 
EAL: Detected lcore 104 as core 32 on socket 0 00:03:41.323 EAL: Detected lcore 105 as core 33 on socket 0 00:03:41.323 EAL: Detected lcore 106 as core 34 on socket 0 00:03:41.323 EAL: Detected lcore 107 as core 35 on socket 0 00:03:41.323 EAL: Detected lcore 108 as core 0 on socket 1 00:03:41.323 EAL: Detected lcore 109 as core 1 on socket 1 00:03:41.323 EAL: Detected lcore 110 as core 2 on socket 1 00:03:41.323 EAL: Detected lcore 111 as core 3 on socket 1 00:03:41.323 EAL: Detected lcore 112 as core 4 on socket 1 00:03:41.323 EAL: Detected lcore 113 as core 5 on socket 1 00:03:41.323 EAL: Detected lcore 114 as core 6 on socket 1 00:03:41.323 EAL: Detected lcore 115 as core 7 on socket 1 00:03:41.323 EAL: Detected lcore 116 as core 8 on socket 1 00:03:41.323 EAL: Detected lcore 117 as core 9 on socket 1 00:03:41.323 EAL: Detected lcore 118 as core 10 on socket 1 00:03:41.323 EAL: Detected lcore 119 as core 11 on socket 1 00:03:41.323 EAL: Detected lcore 120 as core 12 on socket 1 00:03:41.323 EAL: Detected lcore 121 as core 13 on socket 1 00:03:41.323 EAL: Detected lcore 122 as core 14 on socket 1 00:03:41.323 EAL: Detected lcore 123 as core 15 on socket 1 00:03:41.323 EAL: Detected lcore 124 as core 16 on socket 1 00:03:41.323 EAL: Detected lcore 125 as core 17 on socket 1 00:03:41.323 EAL: Detected lcore 126 as core 18 on socket 1 00:03:41.323 EAL: Detected lcore 127 as core 19 on socket 1 00:03:41.323 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:41.323 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:41.323 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:41.323 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:41.323 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:41.323 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:41.323 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:41.323 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:41.323 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:41.323 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:41.323 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:41.323 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:41.323 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:41.323 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:41.323 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:41.323 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:41.323 EAL: Maximum logical cores by configuration: 128 00:03:41.323 EAL: Detected CPU lcores: 128 00:03:41.323 EAL: Detected NUMA nodes: 2 00:03:41.323 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:41.323 EAL: Detected shared linkage of DPDK 00:03:41.323 EAL: No shared files mode enabled, IPC will be disabled 00:03:41.323 EAL: Bus pci wants IOVA as 'DC' 00:03:41.323 EAL: Buses did not request a specific IOVA mode. 00:03:41.323 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:41.323 EAL: Selected IOVA mode 'VA' 00:03:41.323 EAL: Probing VFIO support... 00:03:41.323 EAL: IOMMU type 1 (Type 1) is supported 00:03:41.323 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:41.323 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:41.323 EAL: VFIO support initialized 00:03:41.323 EAL: Ask a virtual area of 0x2e000 bytes 00:03:41.323 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:41.323 EAL: Setting up physically contiguous memory... 
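[Editor's note] The EAL trace above (lcore/NUMA detection, IOVA mode selection, VFIO probing) is the standard DPDK bring-up every one of these env tests performs before touching memory. Below is a minimal sketch of that bring-up using public DPDK APIs; it is illustrative only and not taken from the SPDK test binaries that produced this log.

    /* Illustrative sketch only -- not the SPDK test source. Initializes
     * EAL and reports the topology that produces the "Detected lcore N
     * as core M on socket S" lines above. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes EAL flags such as -c 0x1 and
         * --base-virtaddr=0x200000000000 (seen later in this log). */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("lcores available: %u, NUMA nodes: %u\n",
               rte_lcore_count(), rte_socket_count());
        rte_eal_cleanup();
        return 0;
    }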
00:03:41.323 EAL: Setting maximum number of open files to 524288 00:03:41.323 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:41.323 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:41.323 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:41.323 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:41.323 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.323 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:41.323 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.323 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.323 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:41.323 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:41.323 EAL: Hugepages will be freed exactly as allocated. 00:03:41.323 EAL: No shared files mode enabled, IPC is disabled 00:03:41.323 EAL: No shared files mode enabled, IPC is disabled 00:03:41.323 EAL: TSC frequency is ~2400000 KHz 00:03:41.323 EAL: Main lcore 0 is ready (tid=7fa22b4aea00;cpuset=[0]) 00:03:41.323 EAL: Trying to obtain current memory policy. 00:03:41.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.323 EAL: Restoring previous memory policy: 0 00:03:41.323 EAL: request: mp_malloc_sync 00:03:41.323 EAL: No shared files mode enabled, IPC is disabled 00:03:41.323 EAL: Heap on socket 0 was expanded by 2MB 00:03:41.323 EAL: No shared files mode enabled, IPC is disabled 00:03:41.584 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:41.584 EAL: Mem event callback 'spdk:(nil)' registered 00:03:41.584 00:03:41.584 00:03:41.584 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.584 http://cunit.sourceforge.net/ 00:03:41.584 00:03:41.584 00:03:41.584 Suite: components_suite 00:03:41.584 Test: vtophys_malloc_test ...passed 00:03:41.584 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:41.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 4MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 4MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 6MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 6MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 10MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 10MB 00:03:41.585 EAL: Trying to obtain current memory policy. 
00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 18MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 18MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 34MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 130MB 00:03:41.585 EAL: Trying to obtain current memory policy. 00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.585 EAL: Restoring previous memory policy: 4 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was expanded by 258MB 00:03:41.585 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.585 EAL: request: mp_malloc_sync 00:03:41.585 EAL: No shared files mode enabled, IPC is disabled 00:03:41.585 EAL: Heap on socket 0 was shrunk by 258MB 00:03:41.585 EAL: Trying to obtain current memory policy. 
00:03:41.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.845 EAL: Restoring previous memory policy: 4 00:03:41.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.845 EAL: request: mp_malloc_sync 00:03:41.845 EAL: No shared files mode enabled, IPC is disabled 00:03:41.845 EAL: Heap on socket 0 was expanded by 514MB 00:03:41.845 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.845 EAL: request: mp_malloc_sync 00:03:41.845 EAL: No shared files mode enabled, IPC is disabled 00:03:41.845 EAL: Heap on socket 0 was shrunk by 514MB 00:03:41.846 EAL: Trying to obtain current memory policy. 00:03:41.846 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.106 EAL: Restoring previous memory policy: 4 00:03:42.106 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.106 EAL: request: mp_malloc_sync 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 EAL: Heap on socket 0 was expanded by 1026MB 00:03:42.106 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.106 EAL: request: mp_malloc_sync 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:42.106 passed 00:03:42.106 00:03:42.106 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.106 suites 1 1 n/a 0 0 00:03:42.106 tests 2 2 2 0 0 00:03:42.106 asserts 497 497 497 0 n/a 00:03:42.106 00:03:42.106 Elapsed time = 0.684 seconds 00:03:42.106 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.106 EAL: request: mp_malloc_sync 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 EAL: Heap on socket 0 was shrunk by 2MB 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 EAL: No shared files mode enabled, IPC is disabled 00:03:42.106 00:03:42.106 real 0m0.828s 00:03:42.106 user 0m0.441s 00:03:42.106 sys 0m0.362s 00:03:42.106 11:38:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.106 11:38:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:42.106 ************************************ 00:03:42.106 END TEST env_vtophys 00:03:42.106 ************************************ 00:03:42.367 11:38:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.367 11:38:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.367 11:38:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.367 11:38:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.367 ************************************ 00:03:42.367 START TEST env_pci 00:03:42.367 ************************************ 00:03:42.367 11:38:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.367 00:03:42.367 00:03:42.367 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.367 http://cunit.sourceforge.net/ 00:03:42.367 00:03:42.367 00:03:42.367 Suite: pci 00:03:42.367 Test: pci_hook ...[2024-12-09 11:38:50.073832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3997364 has claimed it 00:03:42.367 EAL: Cannot find device (10000:00:01.0) 00:03:42.367 EAL: Failed to attach device on primary process 00:03:42.367 passed 00:03:42.368 00:03:42.368 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:42.368 suites 1 1 n/a 0 0 00:03:42.368 tests 1 1 1 0 0 00:03:42.368 asserts 25 25 25 0 n/a 00:03:42.368 00:03:42.368 Elapsed time = 0.028 seconds 00:03:42.368 00:03:42.368 real 0m0.049s 00:03:42.368 user 0m0.021s 00:03:42.368 sys 0m0.027s 00:03:42.368 11:38:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.368 11:38:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:42.368 ************************************ 00:03:42.368 END TEST env_pci 00:03:42.368 ************************************ 00:03:42.368 11:38:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:42.368 11:38:50 env -- env/env.sh@15 -- # uname 00:03:42.368 11:38:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:42.368 11:38:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:42.368 11:38:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.368 11:38:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:42.368 11:38:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.368 11:38:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.368 ************************************ 00:03:42.368 START TEST env_dpdk_post_init 00:03:42.368 ************************************ 00:03:42.368 11:38:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.368 EAL: Detected CPU lcores: 128 00:03:42.368 EAL: Detected NUMA nodes: 2 00:03:42.368 EAL: Detected shared linkage of DPDK 00:03:42.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:42.368 EAL: Selected IOVA mode 'VA' 00:03:42.368 EAL: VFIO support initialized 00:03:42.630 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:42.630 EAL: Using IOMMU type 1 (Type 1) 00:03:42.630 EAL: Ignore mapping IO port bar(1) 00:03:42.891 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:42.891 EAL: Ignore mapping IO port bar(1) 00:03:43.152 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:43.152 EAL: Ignore mapping IO port bar(1) 00:03:43.152 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:43.413 EAL: Ignore mapping IO port bar(1) 00:03:43.413 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:43.675 EAL: Ignore mapping IO port bar(1) 00:03:43.675 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:43.937 EAL: Ignore mapping IO port bar(1) 00:03:43.937 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:43.937 EAL: Ignore mapping IO port bar(1) 00:03:44.197 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:44.198 EAL: Ignore mapping IO port bar(1) 00:03:44.458 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:44.718 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:44.718 EAL: Ignore mapping IO port bar(1) 00:03:44.718 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:44.980 EAL: Ignore mapping IO port bar(1) 00:03:44.980 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:45.241 EAL: Ignore mapping IO port bar(1) 00:03:45.241 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:45.502 EAL: Ignore mapping IO port bar(1) 00:03:45.502 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:45.502 EAL: Ignore mapping IO port bar(1) 00:03:45.763 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:45.763 EAL: Ignore mapping IO port bar(1) 00:03:46.024 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:46.024 EAL: Ignore mapping IO port bar(1) 00:03:46.285 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:46.285 EAL: Ignore mapping IO port bar(1) 00:03:46.285 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:46.285 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:46.285 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:46.546 Starting DPDK initialization... 00:03:46.546 Starting SPDK post initialization... 00:03:46.546 SPDK NVMe probe 00:03:46.546 Attaching to 0000:65:00.0 00:03:46.546 Attached to 0000:65:00.0 00:03:46.546 Cleaning up... 00:03:48.462 00:03:48.462 real 0m5.748s 00:03:48.462 user 0m0.104s 00:03:48.462 sys 0m0.198s 00:03:48.462 11:38:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.462 11:38:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.462 ************************************ 00:03:48.462 END TEST env_dpdk_post_init 00:03:48.462 ************************************ 00:03:48.462 11:38:55 env -- env/env.sh@26 -- # uname 00:03:48.462 11:38:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.462 11:38:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.462 11:38:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.462 11:38:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.462 11:38:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.462 ************************************ 00:03:48.462 START TEST env_mem_callbacks 00:03:48.462 ************************************ 00:03:48.462 11:38:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.462 EAL: Detected CPU lcores: 128 00:03:48.462 EAL: Detected NUMA nodes: 2 00:03:48.462 EAL: Detected shared linkage of DPDK 00:03:48.462 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.462 EAL: Selected IOVA mode 'VA' 00:03:48.462 EAL: VFIO support initialized 00:03:48.462 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.462 00:03:48.462 00:03:48.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.462 http://cunit.sourceforge.net/ 00:03:48.462 00:03:48.462 00:03:48.462 Suite: memory 00:03:48.462 Test: test ... 
00:03:48.462 register 0x200000200000 2097152 00:03:48.462 malloc 3145728 00:03:48.462 register 0x200000400000 4194304 00:03:48.462 buf 0x200000500000 len 3145728 PASSED 00:03:48.462 malloc 64 00:03:48.462 buf 0x2000004fff40 len 64 PASSED 00:03:48.462 malloc 4194304 00:03:48.462 register 0x200000800000 6291456 00:03:48.462 buf 0x200000a00000 len 4194304 PASSED 00:03:48.462 free 0x200000500000 3145728 00:03:48.462 free 0x2000004fff40 64 00:03:48.462 unregister 0x200000400000 4194304 PASSED 00:03:48.462 free 0x200000a00000 4194304 00:03:48.462 unregister 0x200000800000 6291456 PASSED 00:03:48.462 malloc 8388608 00:03:48.462 register 0x200000400000 10485760 00:03:48.462 buf 0x200000600000 len 8388608 PASSED 00:03:48.462 free 0x200000600000 8388608 00:03:48.462 unregister 0x200000400000 10485760 PASSED 00:03:48.462 passed 00:03:48.462 00:03:48.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.462 suites 1 1 n/a 0 0 00:03:48.462 tests 1 1 1 0 0 00:03:48.462 asserts 15 15 15 0 n/a 00:03:48.462 00:03:48.462 Elapsed time = 0.010 seconds 00:03:48.462 00:03:48.462 real 0m0.069s 00:03:48.462 user 0m0.021s 00:03:48.462 sys 0m0.048s 00:03:48.462 11:38:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.462 11:38:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.462 ************************************ 00:03:48.462 END TEST env_mem_callbacks 00:03:48.462 ************************************ 00:03:48.462 00:03:48.462 real 0m7.516s 00:03:48.462 user 0m1.024s 00:03:48.462 sys 0m1.054s 00:03:48.462 11:38:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.462 11:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.462 ************************************ 00:03:48.462 END TEST env 00:03:48.462 ************************************ 00:03:48.462 11:38:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.462 11:38:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.462 11:38:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.462 11:38:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.462 ************************************ 00:03:48.462 START TEST rpc 00:03:48.462 ************************************ 00:03:48.462 11:38:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.462 * Looking for test storage... 
00:03:48.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.462 11:38:56 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:48.462 11:38:56 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:48.462 11:38:56 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:48.723 11:38:56 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.724 11:38:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.724 11:38:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.724 11:38:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.724 11:38:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.724 11:38:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.724 11:38:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:48.724 11:38:56 rpc -- scripts/common.sh@345 -- # : 1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.724 11:38:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.724 11:38:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@353 -- # local d=1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.724 11:38:56 rpc -- scripts/common.sh@355 -- # echo 1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.724 11:38:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@353 -- # local d=2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.724 11:38:56 rpc -- scripts/common.sh@355 -- # echo 2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.724 11:38:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.724 11:38:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.724 11:38:56 rpc -- scripts/common.sh@368 -- # return 0 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.724 --rc genhtml_branch_coverage=1 00:03:48.724 --rc genhtml_function_coverage=1 00:03:48.724 --rc genhtml_legend=1 00:03:48.724 --rc geninfo_all_blocks=1 00:03:48.724 --rc geninfo_unexecuted_blocks=1 00:03:48.724 00:03:48.724 ' 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.724 --rc genhtml_branch_coverage=1 00:03:48.724 --rc genhtml_function_coverage=1 00:03:48.724 --rc genhtml_legend=1 00:03:48.724 --rc geninfo_all_blocks=1 00:03:48.724 --rc geninfo_unexecuted_blocks=1 00:03:48.724 00:03:48.724 ' 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.724 --rc genhtml_branch_coverage=1 00:03:48.724 --rc genhtml_function_coverage=1 
00:03:48.724 --rc genhtml_legend=1 00:03:48.724 --rc geninfo_all_blocks=1 00:03:48.724 --rc geninfo_unexecuted_blocks=1 00:03:48.724 00:03:48.724 ' 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.724 --rc genhtml_branch_coverage=1 00:03:48.724 --rc genhtml_function_coverage=1 00:03:48.724 --rc genhtml_legend=1 00:03:48.724 --rc geninfo_all_blocks=1 00:03:48.724 --rc geninfo_unexecuted_blocks=1 00:03:48.724 00:03:48.724 ' 00:03:48.724 11:38:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3998745 00:03:48.724 11:38:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.724 11:38:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:48.724 11:38:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3998745 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 3998745 ']' 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.724 11:38:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.724 [2024-12-09 11:38:56.488911] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:03:48.724 [2024-12-09 11:38:56.488980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998745 ] 00:03:48.724 [2024-12-09 11:38:56.580721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.985 [2024-12-09 11:38:56.632383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:48.985 [2024-12-09 11:38:56.632436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3998745' to capture a snapshot of events at runtime. 00:03:48.985 [2024-12-09 11:38:56.632445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:48.985 [2024-12-09 11:38:56.632453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:48.985 [2024-12-09 11:38:56.632459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3998745 for offline analysis/debug. 
00:03:48.985 [2024-12-09 11:38:56.633231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.558 11:38:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.558 11:38:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:49.558 11:38:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.558 11:38:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.559 11:38:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.559 11:38:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.559 11:38:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.559 11:38:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.559 11:38:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.559 ************************************ 00:03:49.559 START TEST rpc_integrity 00:03:49.559 ************************************ 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.559 { 00:03:49.559 "name": "Malloc0", 00:03:49.559 "aliases": [ 00:03:49.559 "496230d8-381f-4055-9ef6-9e62694fafe0" 00:03:49.559 ], 00:03:49.559 "product_name": "Malloc disk", 00:03:49.559 "block_size": 512, 00:03:49.559 "num_blocks": 16384, 00:03:49.559 "uuid": "496230d8-381f-4055-9ef6-9e62694fafe0", 00:03:49.559 "assigned_rate_limits": { 00:03:49.559 "rw_ios_per_sec": 0, 00:03:49.559 "rw_mbytes_per_sec": 0, 00:03:49.559 "r_mbytes_per_sec": 0, 00:03:49.559 "w_mbytes_per_sec": 0 00:03:49.559 }, 
00:03:49.559 "claimed": false, 00:03:49.559 "zoned": false, 00:03:49.559 "supported_io_types": { 00:03:49.559 "read": true, 00:03:49.559 "write": true, 00:03:49.559 "unmap": true, 00:03:49.559 "flush": true, 00:03:49.559 "reset": true, 00:03:49.559 "nvme_admin": false, 00:03:49.559 "nvme_io": false, 00:03:49.559 "nvme_io_md": false, 00:03:49.559 "write_zeroes": true, 00:03:49.559 "zcopy": true, 00:03:49.559 "get_zone_info": false, 00:03:49.559 "zone_management": false, 00:03:49.559 "zone_append": false, 00:03:49.559 "compare": false, 00:03:49.559 "compare_and_write": false, 00:03:49.559 "abort": true, 00:03:49.559 "seek_hole": false, 00:03:49.559 "seek_data": false, 00:03:49.559 "copy": true, 00:03:49.559 "nvme_iov_md": false 00:03:49.559 }, 00:03:49.559 "memory_domains": [ 00:03:49.559 { 00:03:49.559 "dma_device_id": "system", 00:03:49.559 "dma_device_type": 1 00:03:49.559 }, 00:03:49.559 { 00:03:49.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.559 "dma_device_type": 2 00:03:49.559 } 00:03:49.559 ], 00:03:49.559 "driver_specific": {} 00:03:49.559 } 00:03:49.559 ]' 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.559 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.559 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 [2024-12-09 11:38:57.445276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.821 [2024-12-09 11:38:57.445322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.821 [2024-12-09 11:38:57.445339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc44f80 00:03:49.821 [2024-12-09 11:38:57.445347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.821 [2024-12-09 11:38:57.446959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.821 [2024-12-09 11:38:57.447001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.821 Passthru0 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.821 { 00:03:49.821 "name": "Malloc0", 00:03:49.821 "aliases": [ 00:03:49.821 "496230d8-381f-4055-9ef6-9e62694fafe0" 00:03:49.821 ], 00:03:49.821 "product_name": "Malloc disk", 00:03:49.821 "block_size": 512, 00:03:49.821 "num_blocks": 16384, 00:03:49.821 "uuid": "496230d8-381f-4055-9ef6-9e62694fafe0", 00:03:49.821 "assigned_rate_limits": { 00:03:49.821 "rw_ios_per_sec": 0, 00:03:49.821 "rw_mbytes_per_sec": 0, 00:03:49.821 "r_mbytes_per_sec": 0, 00:03:49.821 "w_mbytes_per_sec": 0 00:03:49.821 }, 00:03:49.821 "claimed": true, 00:03:49.821 "claim_type": "exclusive_write", 00:03:49.821 "zoned": false, 00:03:49.821 "supported_io_types": { 00:03:49.821 "read": true, 00:03:49.821 "write": true, 00:03:49.821 "unmap": true, 00:03:49.821 "flush": 
true, 00:03:49.821 "reset": true, 00:03:49.821 "nvme_admin": false, 00:03:49.821 "nvme_io": false, 00:03:49.821 "nvme_io_md": false, 00:03:49.821 "write_zeroes": true, 00:03:49.821 "zcopy": true, 00:03:49.821 "get_zone_info": false, 00:03:49.821 "zone_management": false, 00:03:49.821 "zone_append": false, 00:03:49.821 "compare": false, 00:03:49.821 "compare_and_write": false, 00:03:49.821 "abort": true, 00:03:49.821 "seek_hole": false, 00:03:49.821 "seek_data": false, 00:03:49.821 "copy": true, 00:03:49.821 "nvme_iov_md": false 00:03:49.821 }, 00:03:49.821 "memory_domains": [ 00:03:49.821 { 00:03:49.821 "dma_device_id": "system", 00:03:49.821 "dma_device_type": 1 00:03:49.821 }, 00:03:49.821 { 00:03:49.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.821 "dma_device_type": 2 00:03:49.821 } 00:03:49.821 ], 00:03:49.821 "driver_specific": {} 00:03:49.821 }, 00:03:49.821 { 00:03:49.821 "name": "Passthru0", 00:03:49.821 "aliases": [ 00:03:49.821 "319eaf62-654d-5b0f-87c1-c2622549ebef" 00:03:49.821 ], 00:03:49.821 "product_name": "passthru", 00:03:49.821 "block_size": 512, 00:03:49.821 "num_blocks": 16384, 00:03:49.821 "uuid": "319eaf62-654d-5b0f-87c1-c2622549ebef", 00:03:49.821 "assigned_rate_limits": { 00:03:49.821 "rw_ios_per_sec": 0, 00:03:49.821 "rw_mbytes_per_sec": 0, 00:03:49.821 "r_mbytes_per_sec": 0, 00:03:49.821 "w_mbytes_per_sec": 0 00:03:49.821 }, 00:03:49.821 "claimed": false, 00:03:49.821 "zoned": false, 00:03:49.821 "supported_io_types": { 00:03:49.821 "read": true, 00:03:49.821 "write": true, 00:03:49.821 "unmap": true, 00:03:49.821 "flush": true, 00:03:49.821 "reset": true, 00:03:49.821 "nvme_admin": false, 00:03:49.821 "nvme_io": false, 00:03:49.821 "nvme_io_md": false, 00:03:49.821 "write_zeroes": true, 00:03:49.821 "zcopy": true, 00:03:49.821 "get_zone_info": false, 00:03:49.821 "zone_management": false, 00:03:49.821 "zone_append": false, 00:03:49.821 "compare": false, 00:03:49.821 "compare_and_write": false, 00:03:49.821 "abort": true, 00:03:49.821 "seek_hole": false, 00:03:49.821 "seek_data": false, 00:03:49.821 "copy": true, 00:03:49.821 "nvme_iov_md": false 00:03:49.821 }, 00:03:49.821 "memory_domains": [ 00:03:49.821 { 00:03:49.821 "dma_device_id": "system", 00:03:49.821 "dma_device_type": 1 00:03:49.821 }, 00:03:49.821 { 00:03:49.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.821 "dma_device_type": 2 00:03:49.821 } 00:03:49.821 ], 00:03:49.821 "driver_specific": { 00:03:49.821 "passthru": { 00:03:49.821 "name": "Passthru0", 00:03:49.821 "base_bdev_name": "Malloc0" 00:03:49.821 } 00:03:49.821 } 00:03:49.821 } 00:03:49.821 ]' 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.821 11:38:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.821 00:03:49.821 real 0m0.298s 00:03:49.821 user 0m0.189s 00:03:49.821 sys 0m0.044s 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 ************************************ 00:03:49.821 END TEST rpc_integrity 00:03:49.821 ************************************ 00:03:49.821 11:38:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.821 11:38:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.821 11:38:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.821 11:38:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 ************************************ 00:03:49.821 START TEST rpc_plugins 00:03:49.821 ************************************ 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:49.821 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.821 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.821 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.821 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.083 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.083 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:50.083 { 00:03:50.083 "name": "Malloc1", 00:03:50.083 "aliases": [ 00:03:50.083 "ed01b191-35e0-436d-988a-d3e5ad08b2ee" 00:03:50.083 ], 00:03:50.083 "product_name": "Malloc disk", 00:03:50.083 "block_size": 4096, 00:03:50.083 "num_blocks": 256, 00:03:50.083 "uuid": "ed01b191-35e0-436d-988a-d3e5ad08b2ee", 00:03:50.083 "assigned_rate_limits": { 00:03:50.083 "rw_ios_per_sec": 0, 00:03:50.083 "rw_mbytes_per_sec": 0, 00:03:50.083 "r_mbytes_per_sec": 0, 00:03:50.083 "w_mbytes_per_sec": 0 00:03:50.083 }, 00:03:50.083 "claimed": false, 00:03:50.083 "zoned": false, 00:03:50.083 "supported_io_types": { 00:03:50.083 "read": true, 00:03:50.083 "write": true, 00:03:50.083 "unmap": true, 00:03:50.083 "flush": true, 00:03:50.083 "reset": true, 00:03:50.083 "nvme_admin": false, 00:03:50.083 "nvme_io": false, 00:03:50.083 "nvme_io_md": false, 00:03:50.083 "write_zeroes": true, 00:03:50.083 "zcopy": true, 00:03:50.083 "get_zone_info": false, 00:03:50.083 "zone_management": false, 00:03:50.083 "zone_append": false, 00:03:50.083 "compare": false, 00:03:50.083 "compare_and_write": false, 00:03:50.083 "abort": true, 00:03:50.083 "seek_hole": false, 00:03:50.083 "seek_data": false, 00:03:50.083 "copy": true, 00:03:50.083 "nvme_iov_md": false 
00:03:50.083 }, 00:03:50.083 "memory_domains": [ 00:03:50.083 { 00:03:50.083 "dma_device_id": "system", 00:03:50.083 "dma_device_type": 1 00:03:50.083 }, 00:03:50.083 { 00:03:50.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.084 "dma_device_type": 2 00:03:50.084 } 00:03:50.084 ], 00:03:50.084 "driver_specific": {} 00:03:50.084 } 00:03:50.084 ]' 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:50.084 11:38:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:50.084 00:03:50.084 real 0m0.151s 00:03:50.084 user 0m0.095s 00:03:50.084 sys 0m0.018s 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.084 11:38:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.084 ************************************ 00:03:50.084 END TEST rpc_plugins 00:03:50.084 ************************************ 00:03:50.084 11:38:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:50.084 11:38:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.084 11:38:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.084 11:38:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.084 ************************************ 00:03:50.084 START TEST rpc_trace_cmd_test 00:03:50.084 ************************************ 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:50.084 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3998745", 00:03:50.084 "tpoint_group_mask": "0x8", 00:03:50.084 "iscsi_conn": { 00:03:50.084 "mask": "0x2", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "scsi": { 00:03:50.084 "mask": "0x4", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "bdev": { 00:03:50.084 "mask": "0x8", 00:03:50.084 "tpoint_mask": "0xffffffffffffffff" 00:03:50.084 }, 00:03:50.084 "nvmf_rdma": { 00:03:50.084 "mask": "0x10", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "nvmf_tcp": { 00:03:50.084 "mask": "0x20", 00:03:50.084 
"tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "ftl": { 00:03:50.084 "mask": "0x40", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "blobfs": { 00:03:50.084 "mask": "0x80", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "dsa": { 00:03:50.084 "mask": "0x200", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "thread": { 00:03:50.084 "mask": "0x400", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "nvme_pcie": { 00:03:50.084 "mask": "0x800", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "iaa": { 00:03:50.084 "mask": "0x1000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "nvme_tcp": { 00:03:50.084 "mask": "0x2000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "bdev_nvme": { 00:03:50.084 "mask": "0x4000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "sock": { 00:03:50.084 "mask": "0x8000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "blob": { 00:03:50.084 "mask": "0x10000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "bdev_raid": { 00:03:50.084 "mask": "0x20000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 }, 00:03:50.084 "scheduler": { 00:03:50.084 "mask": "0x40000", 00:03:50.084 "tpoint_mask": "0x0" 00:03:50.084 } 00:03:50.084 }' 00:03:50.084 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:50.345 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:50.345 11:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:50.345 00:03:50.345 real 0m0.211s 00:03:50.345 user 0m0.177s 00:03:50.345 sys 0m0.028s 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.345 11:38:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:50.345 ************************************ 00:03:50.345 END TEST rpc_trace_cmd_test 00:03:50.345 ************************************ 00:03:50.345 11:38:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:50.345 11:38:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:50.345 11:38:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:50.345 11:38:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.345 11:38:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.345 11:38:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.345 ************************************ 00:03:50.345 START TEST rpc_daemon_integrity 00:03:50.345 ************************************ 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.345 11:38:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.345 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.607 { 00:03:50.607 "name": "Malloc2", 00:03:50.607 "aliases": [ 00:03:50.607 "b7b843b3-4945-4766-ae67-1c6fe83411b0" 00:03:50.607 ], 00:03:50.607 "product_name": "Malloc disk", 00:03:50.607 "block_size": 512, 00:03:50.607 "num_blocks": 16384, 00:03:50.607 "uuid": "b7b843b3-4945-4766-ae67-1c6fe83411b0", 00:03:50.607 "assigned_rate_limits": { 00:03:50.607 "rw_ios_per_sec": 0, 00:03:50.607 "rw_mbytes_per_sec": 0, 00:03:50.607 "r_mbytes_per_sec": 0, 00:03:50.607 "w_mbytes_per_sec": 0 00:03:50.607 }, 00:03:50.607 "claimed": false, 00:03:50.607 "zoned": false, 00:03:50.607 "supported_io_types": { 00:03:50.607 "read": true, 00:03:50.607 "write": true, 00:03:50.607 "unmap": true, 00:03:50.607 "flush": true, 00:03:50.607 "reset": true, 00:03:50.607 "nvme_admin": false, 00:03:50.607 "nvme_io": false, 00:03:50.607 "nvme_io_md": false, 00:03:50.607 "write_zeroes": true, 00:03:50.607 "zcopy": true, 00:03:50.607 "get_zone_info": false, 00:03:50.607 "zone_management": false, 00:03:50.607 "zone_append": false, 00:03:50.607 "compare": false, 00:03:50.607 "compare_and_write": false, 00:03:50.607 "abort": true, 00:03:50.607 "seek_hole": false, 00:03:50.607 "seek_data": false, 00:03:50.607 "copy": true, 00:03:50.607 "nvme_iov_md": false 00:03:50.607 }, 00:03:50.607 "memory_domains": [ 00:03:50.607 { 00:03:50.607 "dma_device_id": "system", 00:03:50.607 "dma_device_type": 1 00:03:50.607 }, 00:03:50.607 { 00:03:50.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.607 "dma_device_type": 2 00:03:50.607 } 00:03:50.607 ], 00:03:50.607 "driver_specific": {} 00:03:50.607 } 00:03:50.607 ]' 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.607 [2024-12-09 11:38:58.347697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.607 
[2024-12-09 11:38:58.347738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.607 [2024-12-09 11:38:58.347754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd762f0 00:03:50.607 [2024-12-09 11:38:58.347762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.607 [2024-12-09 11:38:58.349258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.607 [2024-12-09 11:38:58.349293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.607 Passthru0 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.607 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.608 { 00:03:50.608 "name": "Malloc2", 00:03:50.608 "aliases": [ 00:03:50.608 "b7b843b3-4945-4766-ae67-1c6fe83411b0" 00:03:50.608 ], 00:03:50.608 "product_name": "Malloc disk", 00:03:50.608 "block_size": 512, 00:03:50.608 "num_blocks": 16384, 00:03:50.608 "uuid": "b7b843b3-4945-4766-ae67-1c6fe83411b0", 00:03:50.608 "assigned_rate_limits": { 00:03:50.608 "rw_ios_per_sec": 0, 00:03:50.608 "rw_mbytes_per_sec": 0, 00:03:50.608 "r_mbytes_per_sec": 0, 00:03:50.608 "w_mbytes_per_sec": 0 00:03:50.608 }, 00:03:50.608 "claimed": true, 00:03:50.608 "claim_type": "exclusive_write", 00:03:50.608 "zoned": false, 00:03:50.608 "supported_io_types": { 00:03:50.608 "read": true, 00:03:50.608 "write": true, 00:03:50.608 "unmap": true, 00:03:50.608 "flush": true, 00:03:50.608 "reset": true, 00:03:50.608 "nvme_admin": false, 00:03:50.608 "nvme_io": false, 00:03:50.608 "nvme_io_md": false, 00:03:50.608 "write_zeroes": true, 00:03:50.608 "zcopy": true, 00:03:50.608 "get_zone_info": false, 00:03:50.608 "zone_management": false, 00:03:50.608 "zone_append": false, 00:03:50.608 "compare": false, 00:03:50.608 "compare_and_write": false, 00:03:50.608 "abort": true, 00:03:50.608 "seek_hole": false, 00:03:50.608 "seek_data": false, 00:03:50.608 "copy": true, 00:03:50.608 "nvme_iov_md": false 00:03:50.608 }, 00:03:50.608 "memory_domains": [ 00:03:50.608 { 00:03:50.608 "dma_device_id": "system", 00:03:50.608 "dma_device_type": 1 00:03:50.608 }, 00:03:50.608 { 00:03:50.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.608 "dma_device_type": 2 00:03:50.608 } 00:03:50.608 ], 00:03:50.608 "driver_specific": {} 00:03:50.608 }, 00:03:50.608 { 00:03:50.608 "name": "Passthru0", 00:03:50.608 "aliases": [ 00:03:50.608 "25810d0d-ead7-52f2-893c-e9cb7a8ffaa8" 00:03:50.608 ], 00:03:50.608 "product_name": "passthru", 00:03:50.608 "block_size": 512, 00:03:50.608 "num_blocks": 16384, 00:03:50.608 "uuid": "25810d0d-ead7-52f2-893c-e9cb7a8ffaa8", 00:03:50.608 "assigned_rate_limits": { 00:03:50.608 "rw_ios_per_sec": 0, 00:03:50.608 "rw_mbytes_per_sec": 0, 00:03:50.608 "r_mbytes_per_sec": 0, 00:03:50.608 "w_mbytes_per_sec": 0 00:03:50.608 }, 00:03:50.608 "claimed": false, 00:03:50.608 "zoned": false, 00:03:50.608 "supported_io_types": { 00:03:50.608 "read": true, 00:03:50.608 "write": true, 00:03:50.608 "unmap": true, 00:03:50.608 "flush": true, 00:03:50.608 "reset": true, 
00:03:50.608 "nvme_admin": false, 00:03:50.608 "nvme_io": false, 00:03:50.608 "nvme_io_md": false, 00:03:50.608 "write_zeroes": true, 00:03:50.608 "zcopy": true, 00:03:50.608 "get_zone_info": false, 00:03:50.608 "zone_management": false, 00:03:50.608 "zone_append": false, 00:03:50.608 "compare": false, 00:03:50.608 "compare_and_write": false, 00:03:50.608 "abort": true, 00:03:50.608 "seek_hole": false, 00:03:50.608 "seek_data": false, 00:03:50.608 "copy": true, 00:03:50.608 "nvme_iov_md": false 00:03:50.608 }, 00:03:50.608 "memory_domains": [ 00:03:50.608 { 00:03:50.608 "dma_device_id": "system", 00:03:50.608 "dma_device_type": 1 00:03:50.608 }, 00:03:50.608 { 00:03:50.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.608 "dma_device_type": 2 00:03:50.608 } 00:03:50.608 ], 00:03:50.608 "driver_specific": { 00:03:50.608 "passthru": { 00:03:50.608 "name": "Passthru0", 00:03:50.608 "base_bdev_name": "Malloc2" 00:03:50.608 } 00:03:50.608 } 00:03:50.608 } 00:03:50.608 ]' 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.608 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.869 11:38:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.869 00:03:50.869 real 0m0.305s 00:03:50.869 user 0m0.195s 00:03:50.869 sys 0m0.046s 00:03:50.869 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.869 11:38:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.869 ************************************ 00:03:50.869 END TEST rpc_daemon_integrity 00:03:50.869 ************************************ 00:03:50.869 11:38:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.869 11:38:58 rpc -- rpc/rpc.sh@84 -- # killprocess 3998745 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 3998745 ']' 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@958 -- # kill -0 3998745 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3998745 
00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998745' 00:03:50.869 killing process with pid 3998745 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@973 -- # kill 3998745 00:03:50.869 11:38:58 rpc -- common/autotest_common.sh@978 -- # wait 3998745 00:03:51.130 00:03:51.130 real 0m2.638s 00:03:51.130 user 0m3.355s 00:03:51.130 sys 0m0.809s 00:03:51.130 11:38:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.130 11:38:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.130 ************************************ 00:03:51.130 END TEST rpc 00:03:51.130 ************************************ 00:03:51.130 11:38:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.130 11:38:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.130 11:38:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.130 11:38:58 -- common/autotest_common.sh@10 -- # set +x 00:03:51.130 ************************************ 00:03:51.130 START TEST skip_rpc 00:03:51.130 ************************************ 00:03:51.130 11:38:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.391 * Looking for test storage... 00:03:51.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.391 11:38:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.391 --rc genhtml_branch_coverage=1 00:03:51.391 --rc genhtml_function_coverage=1 00:03:51.391 --rc genhtml_legend=1 00:03:51.391 --rc geninfo_all_blocks=1 00:03:51.391 --rc geninfo_unexecuted_blocks=1 00:03:51.391 00:03:51.391 ' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.391 --rc genhtml_branch_coverage=1 00:03:51.391 --rc genhtml_function_coverage=1 00:03:51.391 --rc genhtml_legend=1 00:03:51.391 --rc geninfo_all_blocks=1 00:03:51.391 --rc geninfo_unexecuted_blocks=1 00:03:51.391 00:03:51.391 ' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.391 --rc genhtml_branch_coverage=1 00:03:51.391 --rc genhtml_function_coverage=1 00:03:51.391 --rc genhtml_legend=1 00:03:51.391 --rc geninfo_all_blocks=1 00:03:51.391 --rc geninfo_unexecuted_blocks=1 00:03:51.391 00:03:51.391 ' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.391 --rc genhtml_branch_coverage=1 00:03:51.391 --rc genhtml_function_coverage=1 00:03:51.391 --rc genhtml_legend=1 00:03:51.391 --rc geninfo_all_blocks=1 00:03:51.391 --rc geninfo_unexecuted_blocks=1 00:03:51.391 00:03:51.391 ' 00:03:51.391 11:38:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.391 11:38:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:51.391 11:38:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.391 11:38:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.391 ************************************ 00:03:51.391 START TEST skip_rpc 00:03:51.391 ************************************ 00:03:51.391 11:38:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:51.391 
11:38:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3999593 00:03:51.391 11:38:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.391 11:38:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:51.391 11:38:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:51.391 [2024-12-09 11:38:59.241248] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:03:51.391 [2024-12-09 11:38:59.241296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999593 ] 00:03:51.651 [2024-12-09 11:38:59.328148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.651 [2024-12-09 11:38:59.373109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.934 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3999593 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3999593 ']' 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3999593 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3999593 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3999593' 00:03:56.935 killing process with pid 3999593 00:03:56.935 11:39:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3999593 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3999593 00:03:56.935 00:03:56.935 real 0m5.268s 00:03:56.935 user 0m5.037s 00:03:56.935 sys 0m0.281s 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.935 11:39:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.935 ************************************ 00:03:56.935 END TEST skip_rpc 00:03:56.935 ************************************ 00:03:56.935 11:39:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.935 11:39:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.935 11:39:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.935 11:39:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.935 ************************************ 00:03:56.935 START TEST skip_rpc_with_json 00:03:56.935 ************************************ 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4000741 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4000741 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4000741 ']' 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.935 11:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.935 [2024-12-09 11:39:04.588365] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:03:56.935 [2024-12-09 11:39:04.588417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000741 ] 00:03:56.935 [2024-12-09 11:39:04.673262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.935 [2024-12-09 11:39:04.705659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.505 [2024-12-09 11:39:05.375511] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:57.505 request: 00:03:57.505 { 00:03:57.505 "trtype": "tcp", 00:03:57.505 "method": "nvmf_get_transports", 00:03:57.505 "req_id": 1 00:03:57.505 } 00:03:57.505 Got JSON-RPC error response 00:03:57.505 response: 00:03:57.505 { 00:03:57.505 "code": -19, 00:03:57.505 "message": "No such device" 00:03:57.505 } 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.505 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.505 [2024-12-09 11:39:05.387605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.766 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.766 { 00:03:57.766 "subsystems": [ 00:03:57.766 { 00:03:57.766 "subsystem": "fsdev", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "fsdev_set_opts", 00:03:57.766 "params": { 00:03:57.766 "fsdev_io_pool_size": 65535, 00:03:57.766 "fsdev_io_cache_size": 256 00:03:57.766 } 00:03:57.766 } 00:03:57.766 ] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "vfio_user_target", 00:03:57.766 "config": null 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "keyring", 00:03:57.766 "config": [] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "iobuf", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "iobuf_set_options", 00:03:57.766 "params": { 00:03:57.766 "small_pool_count": 8192, 00:03:57.766 "large_pool_count": 1024, 00:03:57.766 "small_bufsize": 8192, 00:03:57.766 "large_bufsize": 135168, 00:03:57.766 "enable_numa": false 00:03:57.766 } 00:03:57.766 } 
00:03:57.766 ] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "sock", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "sock_set_default_impl", 00:03:57.766 "params": { 00:03:57.766 "impl_name": "posix" 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "sock_impl_set_options", 00:03:57.766 "params": { 00:03:57.766 "impl_name": "ssl", 00:03:57.766 "recv_buf_size": 4096, 00:03:57.766 "send_buf_size": 4096, 00:03:57.766 "enable_recv_pipe": true, 00:03:57.766 "enable_quickack": false, 00:03:57.766 "enable_placement_id": 0, 00:03:57.766 "enable_zerocopy_send_server": true, 00:03:57.766 "enable_zerocopy_send_client": false, 00:03:57.766 "zerocopy_threshold": 0, 00:03:57.766 "tls_version": 0, 00:03:57.766 "enable_ktls": false 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "sock_impl_set_options", 00:03:57.766 "params": { 00:03:57.766 "impl_name": "posix", 00:03:57.766 "recv_buf_size": 2097152, 00:03:57.766 "send_buf_size": 2097152, 00:03:57.766 "enable_recv_pipe": true, 00:03:57.766 "enable_quickack": false, 00:03:57.766 "enable_placement_id": 0, 00:03:57.766 "enable_zerocopy_send_server": true, 00:03:57.766 "enable_zerocopy_send_client": false, 00:03:57.766 "zerocopy_threshold": 0, 00:03:57.766 "tls_version": 0, 00:03:57.766 "enable_ktls": false 00:03:57.766 } 00:03:57.766 } 00:03:57.766 ] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "vmd", 00:03:57.766 "config": [] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "accel", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "accel_set_options", 00:03:57.766 "params": { 00:03:57.766 "small_cache_size": 128, 00:03:57.766 "large_cache_size": 16, 00:03:57.766 "task_count": 2048, 00:03:57.766 "sequence_count": 2048, 00:03:57.766 "buf_count": 2048 00:03:57.766 } 00:03:57.766 } 00:03:57.766 ] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "bdev", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "bdev_set_options", 00:03:57.766 "params": { 00:03:57.766 "bdev_io_pool_size": 65535, 00:03:57.766 "bdev_io_cache_size": 256, 00:03:57.766 "bdev_auto_examine": true, 00:03:57.766 "iobuf_small_cache_size": 128, 00:03:57.766 "iobuf_large_cache_size": 16 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "bdev_raid_set_options", 00:03:57.766 "params": { 00:03:57.766 "process_window_size_kb": 1024, 00:03:57.766 "process_max_bandwidth_mb_sec": 0 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "bdev_iscsi_set_options", 00:03:57.766 "params": { 00:03:57.766 "timeout_sec": 30 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "bdev_nvme_set_options", 00:03:57.766 "params": { 00:03:57.766 "action_on_timeout": "none", 00:03:57.766 "timeout_us": 0, 00:03:57.766 "timeout_admin_us": 0, 00:03:57.766 "keep_alive_timeout_ms": 10000, 00:03:57.766 "arbitration_burst": 0, 00:03:57.766 "low_priority_weight": 0, 00:03:57.766 "medium_priority_weight": 0, 00:03:57.766 "high_priority_weight": 0, 00:03:57.766 "nvme_adminq_poll_period_us": 10000, 00:03:57.766 "nvme_ioq_poll_period_us": 0, 00:03:57.766 "io_queue_requests": 0, 00:03:57.766 "delay_cmd_submit": true, 00:03:57.766 "transport_retry_count": 4, 00:03:57.766 "bdev_retry_count": 3, 00:03:57.766 "transport_ack_timeout": 0, 00:03:57.766 "ctrlr_loss_timeout_sec": 0, 00:03:57.766 "reconnect_delay_sec": 0, 00:03:57.766 "fast_io_fail_timeout_sec": 0, 00:03:57.766 "disable_auto_failback": false, 00:03:57.766 "generate_uuids": false, 00:03:57.766 "transport_tos": 
0, 00:03:57.766 "nvme_error_stat": false, 00:03:57.766 "rdma_srq_size": 0, 00:03:57.766 "io_path_stat": false, 00:03:57.766 "allow_accel_sequence": false, 00:03:57.766 "rdma_max_cq_size": 0, 00:03:57.766 "rdma_cm_event_timeout_ms": 0, 00:03:57.766 "dhchap_digests": [ 00:03:57.766 "sha256", 00:03:57.766 "sha384", 00:03:57.766 "sha512" 00:03:57.766 ], 00:03:57.766 "dhchap_dhgroups": [ 00:03:57.766 "null", 00:03:57.766 "ffdhe2048", 00:03:57.766 "ffdhe3072", 00:03:57.766 "ffdhe4096", 00:03:57.766 "ffdhe6144", 00:03:57.766 "ffdhe8192" 00:03:57.766 ] 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "bdev_nvme_set_hotplug", 00:03:57.766 "params": { 00:03:57.766 "period_us": 100000, 00:03:57.766 "enable": false 00:03:57.766 } 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "method": "bdev_wait_for_examine" 00:03:57.766 } 00:03:57.766 ] 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "scsi", 00:03:57.766 "config": null 00:03:57.766 }, 00:03:57.766 { 00:03:57.766 "subsystem": "scheduler", 00:03:57.766 "config": [ 00:03:57.766 { 00:03:57.766 "method": "framework_set_scheduler", 00:03:57.766 "params": { 00:03:57.766 "name": "static" 00:03:57.766 } 00:03:57.767 } 00:03:57.767 ] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "vhost_scsi", 00:03:57.767 "config": [] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "vhost_blk", 00:03:57.767 "config": [] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "ublk", 00:03:57.767 "config": [] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "nbd", 00:03:57.767 "config": [] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "nvmf", 00:03:57.767 "config": [ 00:03:57.767 { 00:03:57.767 "method": "nvmf_set_config", 00:03:57.767 "params": { 00:03:57.767 "discovery_filter": "match_any", 00:03:57.767 "admin_cmd_passthru": { 00:03:57.767 "identify_ctrlr": false 00:03:57.767 }, 00:03:57.767 "dhchap_digests": [ 00:03:57.767 "sha256", 00:03:57.767 "sha384", 00:03:57.767 "sha512" 00:03:57.767 ], 00:03:57.767 "dhchap_dhgroups": [ 00:03:57.767 "null", 00:03:57.767 "ffdhe2048", 00:03:57.767 "ffdhe3072", 00:03:57.767 "ffdhe4096", 00:03:57.767 "ffdhe6144", 00:03:57.767 "ffdhe8192" 00:03:57.767 ] 00:03:57.767 } 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "method": "nvmf_set_max_subsystems", 00:03:57.767 "params": { 00:03:57.767 "max_subsystems": 1024 00:03:57.767 } 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "method": "nvmf_set_crdt", 00:03:57.767 "params": { 00:03:57.767 "crdt1": 0, 00:03:57.767 "crdt2": 0, 00:03:57.767 "crdt3": 0 00:03:57.767 } 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "method": "nvmf_create_transport", 00:03:57.767 "params": { 00:03:57.767 "trtype": "TCP", 00:03:57.767 "max_queue_depth": 128, 00:03:57.767 "max_io_qpairs_per_ctrlr": 127, 00:03:57.767 "in_capsule_data_size": 4096, 00:03:57.767 "max_io_size": 131072, 00:03:57.767 "io_unit_size": 131072, 00:03:57.767 "max_aq_depth": 128, 00:03:57.767 "num_shared_buffers": 511, 00:03:57.767 "buf_cache_size": 4294967295, 00:03:57.767 "dif_insert_or_strip": false, 00:03:57.767 "zcopy": false, 00:03:57.767 "c2h_success": true, 00:03:57.767 "sock_priority": 0, 00:03:57.767 "abort_timeout_sec": 1, 00:03:57.767 "ack_timeout": 0, 00:03:57.767 "data_wr_pool_size": 0 00:03:57.767 } 00:03:57.767 } 00:03:57.767 ] 00:03:57.767 }, 00:03:57.767 { 00:03:57.767 "subsystem": "iscsi", 00:03:57.767 "config": [ 00:03:57.767 { 00:03:57.767 "method": "iscsi_set_options", 00:03:57.767 "params": { 00:03:57.767 "node_base": "iqn.2016-06.io.spdk", 00:03:57.767 "max_sessions": 
128, 00:03:57.767 "max_connections_per_session": 2, 00:03:57.767 "max_queue_depth": 64, 00:03:57.767 "default_time2wait": 2, 00:03:57.767 "default_time2retain": 20, 00:03:57.767 "first_burst_length": 8192, 00:03:57.767 "immediate_data": true, 00:03:57.767 "allow_duplicated_isid": false, 00:03:57.767 "error_recovery_level": 0, 00:03:57.767 "nop_timeout": 60, 00:03:57.767 "nop_in_interval": 30, 00:03:57.767 "disable_chap": false, 00:03:57.767 "require_chap": false, 00:03:57.767 "mutual_chap": false, 00:03:57.767 "chap_group": 0, 00:03:57.767 "max_large_datain_per_connection": 64, 00:03:57.767 "max_r2t_per_connection": 4, 00:03:57.767 "pdu_pool_size": 36864, 00:03:57.767 "immediate_data_pool_size": 16384, 00:03:57.767 "data_out_pool_size": 2048 00:03:57.767 } 00:03:57.767 } 00:03:57.767 ] 00:03:57.767 } 00:03:57.767 ] 00:03:57.767 } 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4000741 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4000741 ']' 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4000741 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4000741 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4000741' 00:03:57.767 killing process with pid 4000741 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4000741 00:03:57.767 11:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4000741 00:03:58.028 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4001047 00:03:58.028 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.028 11:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4001047 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4001047 ']' 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4001047 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4001047 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4001047' 00:04:03.315 killing process with pid 4001047 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4001047 00:04:03.315 11:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4001047 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.315 00:04:03.315 real 0m6.550s 00:04:03.315 user 0m6.427s 00:04:03.315 sys 0m0.577s 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.315 ************************************ 00:04:03.315 END TEST skip_rpc_with_json 00:04:03.315 ************************************ 00:04:03.315 11:39:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:03.315 11:39:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.315 11:39:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.315 11:39:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.315 ************************************ 00:04:03.315 START TEST skip_rpc_with_delay 00:04:03.315 ************************************ 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.315 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.576 
[2024-12-09 11:39:11.213328] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.576 00:04:03.576 real 0m0.077s 00:04:03.576 user 0m0.049s 00:04:03.576 sys 0m0.027s 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.576 11:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:03.576 ************************************ 00:04:03.576 END TEST skip_rpc_with_delay 00:04:03.576 ************************************ 00:04:03.576 11:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:03.576 11:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:03.576 11:39:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:03.576 11:39:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.576 11:39:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.576 11:39:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.576 ************************************ 00:04:03.576 START TEST exit_on_failed_rpc_init 00:04:03.576 ************************************ 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4002587 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4002587 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4002587 ']' 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.576 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.577 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.577 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.577 [2024-12-09 11:39:11.369844] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:03.577 [2024-12-09 11:39:11.369898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002587 ] 00:04:03.577 [2024-12-09 11:39:11.421953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.577 [2024-12-09 11:39:11.453960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.837 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.837 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:03.837 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.837 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.837 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.838 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.838 [2024-12-09 11:39:11.689337] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:03.838 [2024-12-09 11:39:11.689390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002610 ] 00:04:04.098 [2024-12-09 11:39:11.776543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.098 [2024-12-09 11:39:11.812206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.098 [2024-12-09 11:39:11.812257] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:04.098 [2024-12-09 11:39:11.812267] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:04.098 [2024-12-09 11:39:11.812273] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4002587 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4002587 ']' 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4002587 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4002587 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4002587' 00:04:04.098 killing process with pid 4002587 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4002587 00:04:04.098 11:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4002587 00:04:04.359 00:04:04.359 real 0m0.784s 00:04:04.359 user 0m0.915s 00:04:04.359 sys 0m0.323s 00:04:04.359 11:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.359 11:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 END TEST exit_on_failed_rpc_init 00:04:04.359 ************************************ 00:04:04.359 11:39:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.359 00:04:04.359 real 0m13.195s 00:04:04.359 user 0m12.652s 00:04:04.359 sys 0m1.528s 00:04:04.359 11:39:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.359 11:39:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 END TEST skip_rpc 00:04:04.359 ************************************ 00:04:04.359 11:39:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.359 11:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.359 11:39:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.359 11:39:12 -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.359 ************************************ 00:04:04.359 START TEST rpc_client 00:04:04.359 ************************************ 00:04:04.359 11:39:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.621 * Looking for test storage... 00:04:04.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.621 11:39:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.621 --rc genhtml_branch_coverage=1 00:04:04.621 --rc genhtml_function_coverage=1 00:04:04.621 --rc genhtml_legend=1 00:04:04.621 --rc geninfo_all_blocks=1 00:04:04.621 --rc geninfo_unexecuted_blocks=1 00:04:04.621 00:04:04.621 ' 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.621 --rc genhtml_branch_coverage=1 00:04:04.621 --rc genhtml_function_coverage=1 00:04:04.621 --rc genhtml_legend=1 00:04:04.621 --rc geninfo_all_blocks=1 00:04:04.621 --rc geninfo_unexecuted_blocks=1 00:04:04.621 00:04:04.621 ' 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.621 --rc genhtml_branch_coverage=1 00:04:04.621 --rc genhtml_function_coverage=1 00:04:04.621 --rc genhtml_legend=1 00:04:04.621 --rc geninfo_all_blocks=1 00:04:04.621 --rc geninfo_unexecuted_blocks=1 00:04:04.621 00:04:04.621 ' 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.621 --rc genhtml_branch_coverage=1 00:04:04.621 --rc genhtml_function_coverage=1 00:04:04.621 --rc genhtml_legend=1 00:04:04.621 --rc geninfo_all_blocks=1 00:04:04.621 --rc geninfo_unexecuted_blocks=1 00:04:04.621 00:04:04.621 ' 00:04:04.621 11:39:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.621 OK 00:04:04.621 11:39:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.621 00:04:04.621 real 0m0.230s 00:04:04.621 user 0m0.136s 00:04:04.621 sys 0m0.108s 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.621 11:39:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.621 ************************************ 00:04:04.621 END TEST rpc_client 00:04:04.621 ************************************ 00:04:04.621 11:39:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:04.621 11:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.621 11:39:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.621 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.883 ************************************ 00:04:04.883 START TEST json_config 00:04:04.883 ************************************ 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.883 11:39:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.883 11:39:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.883 11:39:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.883 11:39:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.883 11:39:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.883 11:39:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:04.883 11:39:12 json_config -- scripts/common.sh@345 -- # : 1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.883 11:39:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.883 11:39:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@353 -- # local d=1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.883 11:39:12 json_config -- scripts/common.sh@355 -- # echo 1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.883 11:39:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@353 -- # local d=2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.883 11:39:12 json_config -- scripts/common.sh@355 -- # echo 2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.883 11:39:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.883 11:39:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.883 11:39:12 json_config -- scripts/common.sh@368 -- # return 0 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.883 --rc genhtml_branch_coverage=1 00:04:04.883 --rc genhtml_function_coverage=1 00:04:04.883 --rc genhtml_legend=1 00:04:04.883 --rc geninfo_all_blocks=1 00:04:04.883 --rc geninfo_unexecuted_blocks=1 00:04:04.883 00:04:04.883 ' 00:04:04.883 11:39:12 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.883 --rc genhtml_branch_coverage=1 00:04:04.883 --rc genhtml_function_coverage=1 00:04:04.883 --rc genhtml_legend=1 00:04:04.883 --rc geninfo_all_blocks=1 00:04:04.883 --rc geninfo_unexecuted_blocks=1 00:04:04.883 00:04:04.884 ' 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.884 --rc genhtml_branch_coverage=1 00:04:04.884 --rc genhtml_function_coverage=1 00:04:04.884 --rc genhtml_legend=1 00:04:04.884 --rc geninfo_all_blocks=1 00:04:04.884 --rc geninfo_unexecuted_blocks=1 00:04:04.884 00:04:04.884 ' 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.884 --rc genhtml_branch_coverage=1 00:04:04.884 --rc genhtml_function_coverage=1 00:04:04.884 --rc genhtml_legend=1 00:04:04.884 --rc geninfo_all_blocks=1 00:04:04.884 --rc geninfo_unexecuted_blocks=1 00:04:04.884 00:04:04.884 ' 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:04.884 11:39:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.884 11:39:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.884 11:39:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.884 11:39:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.884 11:39:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.884 11:39:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.884 11:39:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.884 11:39:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.884 11:39:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.884 11:39:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@52 -- # : 0 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:04:04.884 
11:39:12 json_config -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:04:04.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:04:04.884 11:39:12 json_config -- nvmf/common.sh@56 -- # have_pci_nics=0 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:04.884 INFO: JSON configuration test init 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.884 11:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.884 11:39:12 json_config -- json_config/json_config.sh@272 -- # 
json_config_test_start_app target --wait-for-rpc 00:04:04.885 11:39:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.885 11:39:12 json_config -- json_config/common.sh@10 -- # shift 00:04:04.885 11:39:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.885 11:39:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.885 11:39:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.885 11:39:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.885 11:39:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.885 11:39:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4003078 00:04:04.885 11:39:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.885 Waiting for target to run... 00:04:04.885 11:39:12 json_config -- json_config/common.sh@25 -- # waitforlisten 4003078 /var/tmp/spdk_tgt.sock 00:04:04.885 11:39:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 4003078 ']' 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.885 11:39:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.146 [2024-12-09 11:39:12.800066] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:05.146 [2024-12-09 11:39:12.800141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003078 ] 00:04:05.406 [2024-12-09 11:39:13.061370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.406 [2024-12-09 11:39:13.086267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:05.977 11:39:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.977 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.977 11:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.977 11:39:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:05.977 11:39:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:06.549 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:06.549 11:39:14 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@54 -- # sort 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.549 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.811 MallocForNvmf0 00:04:06.811 11:39:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:06.811 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.072 MallocForNvmf1 00:04:07.072 11:39:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.072 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.072 [2024-12-09 11:39:14.906714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.072 11:39:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.072 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.332 11:39:15 json_config -- 
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:06.549 11:39:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:04:06.549 11:39:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:06.549 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:04:06.811 MallocForNvmf0
00:04:06.811 11:39:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:06.811 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:04:07.072 MallocForNvmf1
00:04:07.072 11:39:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:04:07.072 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:04:07.072 [2024-12-09 11:39:14.906714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:07.072 11:39:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:07.072 11:39:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:04:07.332 11:39:15 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:07.332 11:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:04:07.593 11:39:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:07.593 11:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:04:07.593 11:39:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:07.593 11:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:04:07.855 [2024-12-09 11:39:15.572720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:07.855 11:39:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:04:07.855 11:39:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:07.855 11:39:15 json_config -- common/autotest_common.sh@10 -- # set +x
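Stripped of the xtrace noise, create_nvmf_subsystem_config boils down to the rpc.py sequence below (commands exactly as traced above; the `rpc`/`sock` shorthand is introduced here only for readability):

    rpc="scripts/rpc.py"; sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0       # backing namespaces
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420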
00:04:07.855 11:39:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:04:07.855 11:39:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:07.855 11:39:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:07.855 11:39:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:04:07.855 11:39:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:07.855 11:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:08.116 MallocBdevForConfigChangeCheck
00:04:08.116 11:39:15 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:04:08.116 11:39:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:08.116 11:39:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:08.116 11:39:15 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:04:08.116 11:39:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:08.376 11:39:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:04:08.376 INFO: shutting down applications...
00:04:08.376 11:39:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:04:08.376 11:39:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:04:08.376 11:39:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:04:08.376 11:39:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:08.947 Calling clear_iscsi_subsystem
00:04:08.947 Calling clear_nvmf_subsystem
00:04:08.947 Calling clear_nbd_subsystem
00:04:08.947 Calling clear_ublk_subsystem
00:04:08.947 Calling clear_vhost_blk_subsystem
00:04:08.947 Calling clear_vhost_scsi_subsystem
00:04:08.947 Calling clear_bdev_subsystem
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@350 -- # count=100
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:08.947 11:39:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:04:09.212 11:39:16 json_config -- json_config/json_config.sh@352 -- # break
00:04:09.212 11:39:16 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:04:09.212 11:39:16 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:04:09.212 11:39:16 json_config -- json_config/common.sh@31 -- # local app=target
00:04:09.212 11:39:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:09.212 11:39:16 json_config -- json_config/common.sh@35 -- # [[ -n 4003078 ]]
00:04:09.212 11:39:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4003078
00:04:09.212 11:39:16 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:09.212 11:39:16 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:09.212 11:39:16 json_config -- json_config/common.sh@41 -- # kill -0 4003078
00:04:09.212 11:39:16 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:09.785 11:39:17 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:09.785 11:39:17 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:09.785 11:39:17 json_config -- json_config/common.sh@41 -- # kill -0 4003078
00:04:09.785 11:39:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:09.785 11:39:17 json_config -- json_config/common.sh@43 -- # break
00:04:09.785 11:39:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:09.785 11:39:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:09.785 SPDK target shutdown done
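The shutdown just traced (json_config/common.sh@38-45) is a bounded poll rather than a blocking wait: SIGINT is sent once, then the pid is re-checked up to 30 times at 0.5 s intervals. Reconstructed from the xtrace output, not the literal common.sh source:

    json_config_test_shutdown_app() {
        local app=$1
        kill -SIGINT "${app_pid[$app]}"            # ask the target to exit cleanly
        for ((i = 0; i < 30; i++)); do             # up to 30 x 0.5 s = 15 s of grace
            if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
                app_pid[$app]=""                   # pid is gone; clear the bookkeeping
                break
            fi
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }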
00:04:09.785 11:39:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:04:09.785 INFO: relaunching applications...
00:04:09.785 11:39:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:09.785 11:39:17 json_config -- json_config/common.sh@9 -- # local app=target
00:04:09.785 11:39:17 json_config -- json_config/common.sh@10 -- # shift
00:04:09.785 11:39:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:09.785 11:39:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:09.785 11:39:17 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:09.785 11:39:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:09.785 11:39:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:09.785 11:39:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4004112
00:04:09.785 11:39:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:09.785 Waiting for target to run...
00:04:09.785 11:39:17 json_config -- json_config/common.sh@25 -- # waitforlisten 4004112 /var/tmp/spdk_tgt.sock
00:04:09.785 11:39:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 4004112 ']'
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:09.785 11:39:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:10.045 [2024-12-09 11:39:17.524273] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:04:10.045 [2024-12-09 11:39:17.524337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004112 ]
00:04:10.045 [2024-12-09 11:39:17.794797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:10.045 [2024-12-09 11:39:17.819607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:10.616 [2024-12-09 11:39:18.317060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:10.616 [2024-12-09 11:39:18.349413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:10.616 11:39:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:10.616 11:39:18 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:10.616 11:39:18 json_config -- json_config/common.sh@26 -- # echo ''
00:04:10.616
00:04:10.616 11:39:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:10.616 11:39:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:10.616 INFO: Checking if target configuration is the same...
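This relaunch is the point of the whole test: the configuration dumped from the first target instance is replayed verbatim into a second one, which must come up with identical state. In outline (a sketch of the flow, not the literal script):

    # 1. Snapshot the running target's configuration as JSON.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # 2. Shut the first instance down (SIGINT + poll, as sketched earlier).
    # 3. Boot a fresh instance directly from the snapshot and wait for its RPC socket.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    waitforlisten $! /var/tmp/spdk_tgt.sock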
00:04:10.616 11:39:18 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:10.616 11:39:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:10.616 11:39:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:10.616 + '[' 2 -ne 2 ']'
00:04:10.616 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:10.616 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:10.616 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:10.616 +++ basename /dev/fd/62
00:04:10.616 ++ mktemp /tmp/62.XXX
00:04:10.616 + tmp_file_1=/tmp/62.D6v
00:04:10.616 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:10.616 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:10.616 + tmp_file_2=/tmp/spdk_tgt_config.json.9DS
00:04:10.616 + ret=0
00:04:10.616 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:10.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:10.877 + diff -u /tmp/62.D6v /tmp/spdk_tgt_config.json.9DS
00:04:10.877 + echo 'INFO: JSON config files are the same'
00:04:10.877 INFO: JSON config files are the same
00:04:10.877 + rm /tmp/62.D6v /tmp/spdk_tgt_config.json.9DS
00:04:11.138 + exit 0
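json_diff.sh can declare the files "the same" here only because both sides are normalized first: JSON member order is not significant, so each input is passed through config_filter.py -method sort before diff -u runs. Roughly (reconstructed from the xtrace above; error handling trimmed):

    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    config_filter.py -method sort < "$1" > "$tmp_file_1"   # canonicalize key order
    config_filter.py -method sort < "$2" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'; ret=0
    else
        ret=1    # on mismatch both files are dumped, as seen just below
    fi
    rm "$tmp_file_1" "$tmp_file_2"
    exit $ret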
00:04:11.138 11:39:18 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:11.138 11:39:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:11.138 INFO: changing configuration and checking if this can be detected...
00:04:11.138 11:39:18 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:11.138 11:39:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:11.138 11:39:18 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:11.138 11:39:18 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:11.138 11:39:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:11.138 + '[' 2 -ne 2 ']'
00:04:11.138 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:11.138 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:11.138 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:11.138 +++ basename /dev/fd/62
00:04:11.138 ++ mktemp /tmp/62.XXX
00:04:11.138 + tmp_file_1=/tmp/62.I3p
00:04:11.138 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:11.138 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:11.138 + tmp_file_2=/tmp/spdk_tgt_config.json.qeN
00:04:11.138 + ret=0
00:04:11.138 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:11.399 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:11.660 + diff -u /tmp/62.I3p /tmp/spdk_tgt_config.json.qeN
00:04:11.660 + ret=1
00:04:11.660 + echo '=== Start of file: /tmp/62.I3p ==='
00:04:11.660 === Start of file: /tmp/62.I3p ===
00:04:11.660 + cat /tmp/62.I3p
00:04:11.660 + echo '=== End of file: /tmp/62.I3p ==='
00:04:11.660 === End of file: /tmp/62.I3p ===
00:04:11.660 + echo ''
00:04:11.660
00:04:11.660 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qeN ==='
00:04:11.660 === Start of file: /tmp/spdk_tgt_config.json.qeN ===
00:04:11.660 + cat /tmp/spdk_tgt_config.json.qeN
00:04:11.660 + echo '=== End of file: /tmp/spdk_tgt_config.json.qeN ==='
00:04:11.660 === End of file: /tmp/spdk_tgt_config.json.qeN ===
00:04:11.660 + echo ''
00:04:11.660
00:04:11.660 + rm /tmp/62.I3p /tmp/spdk_tgt_config.json.qeN
00:04:11.660 + exit 1
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:11.660 INFO: configuration change detected.
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 4004112 ]]
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:11.660 11:39:19 json_config -- json_config/json_config.sh@330 -- # killprocess 4004112
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@954 -- # '[' -z 4004112 ']'
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@958 -- # kill -0 4004112
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@959 -- # uname
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4004112
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4004112'
00:04:11.660 killing process with pid 4004112
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@973 -- # kill 4004112
00:04:11.660 11:39:19 json_config -- common/autotest_common.sh@978 -- # wait 4004112
00:04:11.921 11:39:19 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:11.921 11:39:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:11.921 11:39:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:11.921 11:39:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:11.921 11:39:19 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:11.921 11:39:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:11.921 INFO: Success
00:04:11.921
00:04:11.921 real 0m7.224s
00:04:11.921 user 0m8.779s
00:04:11.921 sys 0m1.841s
00:04:11.921 11:39:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.921 11:39:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:11.921 ************************************
00:04:11.921 END TEST json_config
00:04:11.921 ************************************
00:04:12.184 11:39:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:12.184 11:39:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:12.184 11:39:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:12.184 11:39:19 -- common/autotest_common.sh@10 -- # set +x
00:04:12.184 ************************************
00:04:12.184 START TEST json_config_extra_key
00:04:12.184 ************************************
00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:12.184 11:39:19 json_config_extra_key
-- scripts/common.sh@340 -- # ver1_l=2 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.184 11:39:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.184 --rc genhtml_branch_coverage=1 00:04:12.184 --rc genhtml_function_coverage=1 00:04:12.184 --rc genhtml_legend=1 00:04:12.184 --rc geninfo_all_blocks=1 00:04:12.184 --rc geninfo_unexecuted_blocks=1 00:04:12.184 00:04:12.184 ' 00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.184 --rc genhtml_branch_coverage=1 00:04:12.184 --rc genhtml_function_coverage=1 00:04:12.184 --rc genhtml_legend=1 00:04:12.184 --rc geninfo_all_blocks=1 00:04:12.184 --rc geninfo_unexecuted_blocks=1 00:04:12.184 00:04:12.184 ' 00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.184 --rc genhtml_branch_coverage=1 00:04:12.184 --rc genhtml_function_coverage=1 00:04:12.184 --rc genhtml_legend=1 00:04:12.184 --rc geninfo_all_blocks=1 00:04:12.184 --rc geninfo_unexecuted_blocks=1 00:04:12.184 00:04:12.184 ' 00:04:12.184 11:39:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.184 --rc genhtml_branch_coverage=1 00:04:12.184 --rc genhtml_function_coverage=1 00:04:12.184 --rc genhtml_legend=1 00:04:12.184 --rc geninfo_all_blocks=1 00:04:12.184 --rc geninfo_unexecuted_blocks=1 00:04:12.184 00:04:12.184 ' 00:04:12.184 11:39:19 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:12.184 11:39:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:12.184 11:39:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.184 11:39:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.184 11:39:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.184 11:39:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.185 11:39:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:12.185 11:39:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.185 11:39:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.185 11:39:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.185 11:39:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.185 11:39:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.185 11:39:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.185 11:39:20 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.185 11:39:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:12.185 11:39:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@52 -- # : 0 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:04:12.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:04:12.185 11:39:20 json_config_extra_key -- nvmf/common.sh@56 -- # have_pci_nics=0 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:12.185 INFO: launching applications... 
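The declare -A lines above are json_config/common.sh keying every per-application attribute (pid, RPC socket, start-up parameters, config file) off a single app name, 'target' in this run. The pattern, as reconstructed from the trace:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR   # abort the test on any error
    # Helpers then take the app name as $1, e.g.:
    #   spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}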
00:04:12.185 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4004681 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.185 Waiting for target to run... 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4004681 /var/tmp/spdk_tgt.sock 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4004681 ']' 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.185 11:39:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.185 11:39:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.447 [2024-12-09 11:39:20.090505] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:12.447 [2024-12-09 11:39:20.090586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004681 ] 00:04:12.708 [2024-12-09 11:39:20.373244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.708 [2024-12-09 11:39:20.401914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.281 11:39:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.281 11:39:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:13.281 00:04:13.281 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:13.281 INFO: shutting down applications... 
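waitforlisten (max_retries=100 in the traces above) gates each test on the RPC socket actually answering, not merely on the pid existing. A minimal sketch of that polling idea, assuming spdk_get_version as the probe (the real helper in common/autotest_common.sh is more elaborate):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died before listening
            if scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
                return 0                               # socket is accepting RPCs
            fi
            sleep 0.1
        done
        return 1
    }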
00:04:13.281 11:39:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4004681 ]] 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4004681 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4004681 00:04:13.281 11:39:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4004681 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:13.542 11:39:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:13.542 SPDK target shutdown done 00:04:13.542 11:39:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:13.542 Success 00:04:13.542 00:04:13.542 real 0m1.568s 00:04:13.542 user 0m1.179s 00:04:13.542 sys 0m0.406s 00:04:13.542 11:39:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.542 11:39:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:13.542 ************************************ 00:04:13.542 END TEST json_config_extra_key 00:04:13.542 ************************************ 00:04:13.803 11:39:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:13.803 11:39:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.803 11:39:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.803 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.803 ************************************ 00:04:13.803 START TEST alias_rpc 00:04:13.803 ************************************ 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:13.803 * Looking for test storage... 
00:04:13.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.803 11:39:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.803 --rc genhtml_branch_coverage=1 00:04:13.803 --rc genhtml_function_coverage=1 00:04:13.803 --rc genhtml_legend=1 00:04:13.803 --rc geninfo_all_blocks=1 00:04:13.803 --rc geninfo_unexecuted_blocks=1 00:04:13.803 00:04:13.803 ' 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.803 --rc genhtml_branch_coverage=1 00:04:13.803 --rc genhtml_function_coverage=1 00:04:13.803 --rc genhtml_legend=1 00:04:13.803 --rc geninfo_all_blocks=1 00:04:13.803 --rc geninfo_unexecuted_blocks=1 00:04:13.803 00:04:13.803 ' 00:04:13.803 11:39:21 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.803 --rc genhtml_branch_coverage=1 00:04:13.803 --rc genhtml_function_coverage=1 00:04:13.803 --rc genhtml_legend=1 00:04:13.803 --rc geninfo_all_blocks=1 00:04:13.803 --rc geninfo_unexecuted_blocks=1 00:04:13.803 00:04:13.803 ' 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.803 --rc genhtml_branch_coverage=1 00:04:13.803 --rc genhtml_function_coverage=1 00:04:13.803 --rc genhtml_legend=1 00:04:13.803 --rc geninfo_all_blocks=1 00:04:13.803 --rc geninfo_unexecuted_blocks=1 00:04:13.803 00:04:13.803 ' 00:04:13.803 11:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:13.803 11:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4005073 00:04:13.803 11:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4005073 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4005073 ']' 00:04:13.803 11:39:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.803 11:39:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.064 [2024-12-09 11:39:21.727050] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:14.064 [2024-12-09 11:39:21.727127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005073 ] 00:04:14.064 [2024-12-09 11:39:21.810918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.064 [2024-12-09 11:39:21.845924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.636 11:39:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.636 11:39:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:14.636 11:39:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:14.896 11:39:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4005073 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4005073 ']' 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4005073 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4005073 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4005073' 00:04:14.896 killing process with pid 4005073 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 4005073 00:04:14.896 11:39:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 4005073 00:04:15.157 00:04:15.157 real 0m1.492s 00:04:15.157 user 0m1.632s 00:04:15.157 sys 0m0.422s 00:04:15.157 11:39:22 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.157 11:39:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.157 ************************************ 00:04:15.157 END TEST alias_rpc 00:04:15.157 ************************************ 00:04:15.157 11:39:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:15.157 11:39:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:15.157 11:39:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.157 11:39:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.157 11:39:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.157 ************************************ 00:04:15.157 START TEST spdkcli_tcp 00:04:15.157 ************************************ 00:04:15.157 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:15.419 * Looking for test storage... 
00:04:15.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.419 11:39:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.419 --rc genhtml_branch_coverage=1 00:04:15.419 --rc genhtml_function_coverage=1 00:04:15.419 --rc genhtml_legend=1 00:04:15.419 --rc geninfo_all_blocks=1 00:04:15.419 --rc geninfo_unexecuted_blocks=1 00:04:15.419 00:04:15.419 ' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.419 --rc genhtml_branch_coverage=1 00:04:15.419 --rc genhtml_function_coverage=1 00:04:15.419 --rc genhtml_legend=1 00:04:15.419 --rc geninfo_all_blocks=1 00:04:15.419 --rc 
geninfo_unexecuted_blocks=1 00:04:15.419 00:04:15.419 ' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.419 --rc genhtml_branch_coverage=1 00:04:15.419 --rc genhtml_function_coverage=1 00:04:15.419 --rc genhtml_legend=1 00:04:15.419 --rc geninfo_all_blocks=1 00:04:15.419 --rc geninfo_unexecuted_blocks=1 00:04:15.419 00:04:15.419 ' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.419 --rc genhtml_branch_coverage=1 00:04:15.419 --rc genhtml_function_coverage=1 00:04:15.419 --rc genhtml_legend=1 00:04:15.419 --rc geninfo_all_blocks=1 00:04:15.419 --rc geninfo_unexecuted_blocks=1 00:04:15.419 00:04:15.419 ' 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4005471 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4005471 00:04:15.419 11:39:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4005471 ']' 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.419 11:39:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.419 [2024-12-09 11:39:23.302455] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
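The spdkcli_tcp run starting here exercises the same JSON-RPC server over TCP: as the trace below shows, spdkcli/tcp.sh points a socat bridge at the target's UNIX socket and then drives rpc.py against 127.0.0.1:9998. Condensed:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front-end for the UNIX RPC socket
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # -r retries, -t timeout in seconds
    kill "$socat_pid"   # assumed cleanup; the test's err_cleanup trap does the equivalent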
00:04:15.420 [2024-12-09 11:39:23.302529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005471 ] 00:04:15.680 [2024-12-09 11:39:23.391278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.680 [2024-12-09 11:39:23.433090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.680 [2024-12-09 11:39:23.433092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.251 11:39:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.251 11:39:24 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:16.251 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4005658 00:04:16.251 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:16.251 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:16.512 [ 00:04:16.512 "bdev_malloc_delete", 00:04:16.512 "bdev_malloc_create", 00:04:16.512 "bdev_null_resize", 00:04:16.512 "bdev_null_delete", 00:04:16.512 "bdev_null_create", 00:04:16.512 "bdev_nvme_cuse_unregister", 00:04:16.512 "bdev_nvme_cuse_register", 00:04:16.512 "bdev_opal_new_user", 00:04:16.512 "bdev_opal_set_lock_state", 00:04:16.512 "bdev_opal_delete", 00:04:16.512 "bdev_opal_get_info", 00:04:16.512 "bdev_opal_create", 00:04:16.512 "bdev_nvme_opal_revert", 00:04:16.512 "bdev_nvme_opal_init", 00:04:16.512 "bdev_nvme_send_cmd", 00:04:16.512 "bdev_nvme_set_keys", 00:04:16.512 "bdev_nvme_get_path_iostat", 00:04:16.512 "bdev_nvme_get_mdns_discovery_info", 00:04:16.512 "bdev_nvme_stop_mdns_discovery", 00:04:16.512 "bdev_nvme_start_mdns_discovery", 00:04:16.512 "bdev_nvme_set_multipath_policy", 00:04:16.512 "bdev_nvme_set_preferred_path", 00:04:16.512 "bdev_nvme_get_io_paths", 00:04:16.512 "bdev_nvme_remove_error_injection", 00:04:16.512 "bdev_nvme_add_error_injection", 00:04:16.512 "bdev_nvme_get_discovery_info", 00:04:16.512 "bdev_nvme_stop_discovery", 00:04:16.512 "bdev_nvme_start_discovery", 00:04:16.512 "bdev_nvme_get_controller_health_info", 00:04:16.512 "bdev_nvme_disable_controller", 00:04:16.512 "bdev_nvme_enable_controller", 00:04:16.512 "bdev_nvme_reset_controller", 00:04:16.512 "bdev_nvme_get_transport_statistics", 00:04:16.512 "bdev_nvme_apply_firmware", 00:04:16.512 "bdev_nvme_detach_controller", 00:04:16.512 "bdev_nvme_get_controllers", 00:04:16.512 "bdev_nvme_attach_controller", 00:04:16.512 "bdev_nvme_set_hotplug", 00:04:16.512 "bdev_nvme_set_options", 00:04:16.512 "bdev_passthru_delete", 00:04:16.512 "bdev_passthru_create", 00:04:16.512 "bdev_lvol_set_parent_bdev", 00:04:16.512 "bdev_lvol_set_parent", 00:04:16.512 "bdev_lvol_check_shallow_copy", 00:04:16.512 "bdev_lvol_start_shallow_copy", 00:04:16.512 "bdev_lvol_grow_lvstore", 00:04:16.512 "bdev_lvol_get_lvols", 00:04:16.512 "bdev_lvol_get_lvstores", 00:04:16.512 "bdev_lvol_delete", 00:04:16.512 "bdev_lvol_set_read_only", 00:04:16.512 "bdev_lvol_resize", 00:04:16.512 "bdev_lvol_decouple_parent", 00:04:16.512 "bdev_lvol_inflate", 00:04:16.512 "bdev_lvol_rename", 00:04:16.512 "bdev_lvol_clone_bdev", 00:04:16.512 "bdev_lvol_clone", 00:04:16.512 "bdev_lvol_snapshot", 00:04:16.512 "bdev_lvol_create", 00:04:16.512 "bdev_lvol_delete_lvstore", 00:04:16.512 "bdev_lvol_rename_lvstore", 
00:04:16.512 "bdev_lvol_create_lvstore", 00:04:16.512 "bdev_raid_set_options", 00:04:16.512 "bdev_raid_remove_base_bdev", 00:04:16.512 "bdev_raid_add_base_bdev", 00:04:16.512 "bdev_raid_delete", 00:04:16.512 "bdev_raid_create", 00:04:16.512 "bdev_raid_get_bdevs", 00:04:16.512 "bdev_error_inject_error", 00:04:16.512 "bdev_error_delete", 00:04:16.512 "bdev_error_create", 00:04:16.512 "bdev_split_delete", 00:04:16.512 "bdev_split_create", 00:04:16.512 "bdev_delay_delete", 00:04:16.512 "bdev_delay_create", 00:04:16.512 "bdev_delay_update_latency", 00:04:16.512 "bdev_zone_block_delete", 00:04:16.512 "bdev_zone_block_create", 00:04:16.513 "blobfs_create", 00:04:16.513 "blobfs_detect", 00:04:16.513 "blobfs_set_cache_size", 00:04:16.513 "bdev_aio_delete", 00:04:16.513 "bdev_aio_rescan", 00:04:16.513 "bdev_aio_create", 00:04:16.513 "bdev_ftl_set_property", 00:04:16.513 "bdev_ftl_get_properties", 00:04:16.513 "bdev_ftl_get_stats", 00:04:16.513 "bdev_ftl_unmap", 00:04:16.513 "bdev_ftl_unload", 00:04:16.513 "bdev_ftl_delete", 00:04:16.513 "bdev_ftl_load", 00:04:16.513 "bdev_ftl_create", 00:04:16.513 "bdev_virtio_attach_controller", 00:04:16.513 "bdev_virtio_scsi_get_devices", 00:04:16.513 "bdev_virtio_detach_controller", 00:04:16.513 "bdev_virtio_blk_set_hotplug", 00:04:16.513 "bdev_iscsi_delete", 00:04:16.513 "bdev_iscsi_create", 00:04:16.513 "bdev_iscsi_set_options", 00:04:16.513 "accel_error_inject_error", 00:04:16.513 "ioat_scan_accel_module", 00:04:16.513 "dsa_scan_accel_module", 00:04:16.513 "iaa_scan_accel_module", 00:04:16.513 "vfu_virtio_create_fs_endpoint", 00:04:16.513 "vfu_virtio_create_scsi_endpoint", 00:04:16.513 "vfu_virtio_scsi_remove_target", 00:04:16.513 "vfu_virtio_scsi_add_target", 00:04:16.513 "vfu_virtio_create_blk_endpoint", 00:04:16.513 "vfu_virtio_delete_endpoint", 00:04:16.513 "keyring_file_remove_key", 00:04:16.513 "keyring_file_add_key", 00:04:16.513 "keyring_linux_set_options", 00:04:16.513 "fsdev_aio_delete", 00:04:16.513 "fsdev_aio_create", 00:04:16.513 "iscsi_get_histogram", 00:04:16.513 "iscsi_enable_histogram", 00:04:16.513 "iscsi_set_options", 00:04:16.513 "iscsi_get_auth_groups", 00:04:16.513 "iscsi_auth_group_remove_secret", 00:04:16.513 "iscsi_auth_group_add_secret", 00:04:16.513 "iscsi_delete_auth_group", 00:04:16.513 "iscsi_create_auth_group", 00:04:16.513 "iscsi_set_discovery_auth", 00:04:16.513 "iscsi_get_options", 00:04:16.513 "iscsi_target_node_request_logout", 00:04:16.513 "iscsi_target_node_set_redirect", 00:04:16.513 "iscsi_target_node_set_auth", 00:04:16.513 "iscsi_target_node_add_lun", 00:04:16.513 "iscsi_get_stats", 00:04:16.513 "iscsi_get_connections", 00:04:16.513 "iscsi_portal_group_set_auth", 00:04:16.513 "iscsi_start_portal_group", 00:04:16.513 "iscsi_delete_portal_group", 00:04:16.513 "iscsi_create_portal_group", 00:04:16.513 "iscsi_get_portal_groups", 00:04:16.513 "iscsi_delete_target_node", 00:04:16.513 "iscsi_target_node_remove_pg_ig_maps", 00:04:16.513 "iscsi_target_node_add_pg_ig_maps", 00:04:16.513 "iscsi_create_target_node", 00:04:16.513 "iscsi_get_target_nodes", 00:04:16.513 "iscsi_delete_initiator_group", 00:04:16.513 "iscsi_initiator_group_remove_initiators", 00:04:16.513 "iscsi_initiator_group_add_initiators", 00:04:16.513 "iscsi_create_initiator_group", 00:04:16.513 "iscsi_get_initiator_groups", 00:04:16.513 "nvmf_set_crdt", 00:04:16.513 "nvmf_set_config", 00:04:16.513 "nvmf_set_max_subsystems", 00:04:16.513 "nvmf_stop_mdns_prr", 00:04:16.513 "nvmf_publish_mdns_prr", 00:04:16.513 "nvmf_subsystem_get_listeners", 00:04:16.513 
"nvmf_subsystem_get_qpairs", 00:04:16.513 "nvmf_subsystem_get_controllers", 00:04:16.513 "nvmf_get_stats", 00:04:16.513 "nvmf_get_transports", 00:04:16.513 "nvmf_create_transport", 00:04:16.513 "nvmf_get_targets", 00:04:16.513 "nvmf_delete_target", 00:04:16.513 "nvmf_create_target", 00:04:16.513 "nvmf_subsystem_allow_any_host", 00:04:16.513 "nvmf_subsystem_set_keys", 00:04:16.513 "nvmf_subsystem_remove_host", 00:04:16.513 "nvmf_subsystem_add_host", 00:04:16.513 "nvmf_ns_remove_host", 00:04:16.513 "nvmf_ns_add_host", 00:04:16.513 "nvmf_subsystem_remove_ns", 00:04:16.513 "nvmf_subsystem_set_ns_ana_group", 00:04:16.513 "nvmf_subsystem_add_ns", 00:04:16.513 "nvmf_subsystem_listener_set_ana_state", 00:04:16.513 "nvmf_discovery_get_referrals", 00:04:16.513 "nvmf_discovery_remove_referral", 00:04:16.513 "nvmf_discovery_add_referral", 00:04:16.513 "nvmf_subsystem_remove_listener", 00:04:16.513 "nvmf_subsystem_add_listener", 00:04:16.513 "nvmf_delete_subsystem", 00:04:16.513 "nvmf_create_subsystem", 00:04:16.513 "nvmf_get_subsystems", 00:04:16.513 "env_dpdk_get_mem_stats", 00:04:16.513 "nbd_get_disks", 00:04:16.513 "nbd_stop_disk", 00:04:16.513 "nbd_start_disk", 00:04:16.513 "ublk_recover_disk", 00:04:16.513 "ublk_get_disks", 00:04:16.513 "ublk_stop_disk", 00:04:16.513 "ublk_start_disk", 00:04:16.513 "ublk_destroy_target", 00:04:16.513 "ublk_create_target", 00:04:16.513 "virtio_blk_create_transport", 00:04:16.513 "virtio_blk_get_transports", 00:04:16.513 "vhost_controller_set_coalescing", 00:04:16.513 "vhost_get_controllers", 00:04:16.513 "vhost_delete_controller", 00:04:16.513 "vhost_create_blk_controller", 00:04:16.513 "vhost_scsi_controller_remove_target", 00:04:16.513 "vhost_scsi_controller_add_target", 00:04:16.513 "vhost_start_scsi_controller", 00:04:16.513 "vhost_create_scsi_controller", 00:04:16.513 "thread_set_cpumask", 00:04:16.513 "scheduler_set_options", 00:04:16.513 "framework_get_governor", 00:04:16.513 "framework_get_scheduler", 00:04:16.513 "framework_set_scheduler", 00:04:16.513 "framework_get_reactors", 00:04:16.513 "thread_get_io_channels", 00:04:16.513 "thread_get_pollers", 00:04:16.513 "thread_get_stats", 00:04:16.513 "framework_monitor_context_switch", 00:04:16.513 "spdk_kill_instance", 00:04:16.513 "log_enable_timestamps", 00:04:16.513 "log_get_flags", 00:04:16.513 "log_clear_flag", 00:04:16.513 "log_set_flag", 00:04:16.513 "log_get_level", 00:04:16.513 "log_set_level", 00:04:16.513 "log_get_print_level", 00:04:16.513 "log_set_print_level", 00:04:16.513 "framework_enable_cpumask_locks", 00:04:16.513 "framework_disable_cpumask_locks", 00:04:16.513 "framework_wait_init", 00:04:16.513 "framework_start_init", 00:04:16.513 "scsi_get_devices", 00:04:16.513 "bdev_get_histogram", 00:04:16.513 "bdev_enable_histogram", 00:04:16.513 "bdev_set_qos_limit", 00:04:16.513 "bdev_set_qd_sampling_period", 00:04:16.513 "bdev_get_bdevs", 00:04:16.513 "bdev_reset_iostat", 00:04:16.513 "bdev_get_iostat", 00:04:16.513 "bdev_examine", 00:04:16.513 "bdev_wait_for_examine", 00:04:16.513 "bdev_set_options", 00:04:16.513 "accel_get_stats", 00:04:16.513 "accel_set_options", 00:04:16.513 "accel_set_driver", 00:04:16.513 "accel_crypto_key_destroy", 00:04:16.513 "accel_crypto_keys_get", 00:04:16.513 "accel_crypto_key_create", 00:04:16.513 "accel_assign_opc", 00:04:16.513 "accel_get_module_info", 00:04:16.513 "accel_get_opc_assignments", 00:04:16.513 "vmd_rescan", 00:04:16.513 "vmd_remove_device", 00:04:16.513 "vmd_enable", 00:04:16.513 "sock_get_default_impl", 00:04:16.513 "sock_set_default_impl", 
00:04:16.513 "sock_impl_set_options", 00:04:16.513 "sock_impl_get_options", 00:04:16.513 "iobuf_get_stats", 00:04:16.513 "iobuf_set_options", 00:04:16.513 "keyring_get_keys", 00:04:16.513 "vfu_tgt_set_base_path", 00:04:16.513 "framework_get_pci_devices", 00:04:16.513 "framework_get_config", 00:04:16.513 "framework_get_subsystems", 00:04:16.513 "fsdev_set_opts", 00:04:16.513 "fsdev_get_opts", 00:04:16.513 "trace_get_info", 00:04:16.513 "trace_get_tpoint_group_mask", 00:04:16.513 "trace_disable_tpoint_group", 00:04:16.513 "trace_enable_tpoint_group", 00:04:16.513 "trace_clear_tpoint_mask", 00:04:16.513 "trace_set_tpoint_mask", 00:04:16.513 "notify_get_notifications", 00:04:16.513 "notify_get_types", 00:04:16.513 "spdk_get_version", 00:04:16.513 "rpc_get_methods" 00:04:16.513 ] 00:04:16.513 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.513 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:16.513 11:39:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4005471 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4005471 ']' 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4005471 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4005471 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4005471' 00:04:16.513 killing process with pid 4005471 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4005471 00:04:16.513 11:39:24 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4005471 00:04:16.774 00:04:16.774 real 0m1.518s 00:04:16.774 user 0m2.727s 00:04:16.774 sys 0m0.480s 00:04:16.774 11:39:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.774 11:39:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 ************************************ 00:04:16.774 END TEST spdkcli_tcp 00:04:16.774 ************************************ 00:04:16.774 11:39:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.774 11:39:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.774 11:39:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.774 11:39:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 ************************************ 00:04:16.774 START TEST dpdk_mem_utility 00:04:16.774 ************************************ 00:04:16.774 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:17.034 * Looking for test storage... 
00:04:17.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.034 11:39:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.034 --rc genhtml_branch_coverage=1 00:04:17.034 --rc genhtml_function_coverage=1 00:04:17.034 --rc genhtml_legend=1 00:04:17.034 --rc geninfo_all_blocks=1 00:04:17.034 --rc geninfo_unexecuted_blocks=1 00:04:17.034 00:04:17.034 ' 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.034 --rc 
genhtml_branch_coverage=1 00:04:17.034 --rc genhtml_function_coverage=1 00:04:17.034 --rc genhtml_legend=1 00:04:17.034 --rc geninfo_all_blocks=1 00:04:17.034 --rc geninfo_unexecuted_blocks=1 00:04:17.034 00:04:17.034 ' 00:04:17.034 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.034 --rc genhtml_branch_coverage=1 00:04:17.034 --rc genhtml_function_coverage=1 00:04:17.034 --rc genhtml_legend=1 00:04:17.034 --rc geninfo_all_blocks=1 00:04:17.034 --rc geninfo_unexecuted_blocks=1 00:04:17.035 00:04:17.035 ' 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.035 --rc genhtml_branch_coverage=1 00:04:17.035 --rc genhtml_function_coverage=1 00:04:17.035 --rc genhtml_legend=1 00:04:17.035 --rc geninfo_all_blocks=1 00:04:17.035 --rc geninfo_unexecuted_blocks=1 00:04:17.035 00:04:17.035 ' 00:04:17.035 11:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:17.035 11:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4005882 00:04:17.035 11:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4005882 00:04:17.035 11:39:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4005882 ']' 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.035 11:39:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.035 [2024-12-09 11:39:24.879461] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
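The flow this test drives once spdk_tgt is up is worth pulling out of the xtrace below: dump DPDK memory statistics over RPC, then post-process the dump with the helper script. A minimal sketch of the same flow against any live target, assuming the workspace path used by this job:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &                    # target whose DPDK heap we want to inspect
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes the dump to /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0           # element-level detail for heap id 0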
00:04:17.035 [2024-12-09 11:39:24.879515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005882 ] 00:04:17.303 [2024-12-09 11:39:24.964596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.303 [2024-12-09 11:39:24.995319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.875 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.875 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:17.875 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:17.875 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:17.875 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.875 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.875 { 00:04:17.875 "filename": "/tmp/spdk_mem_dump.txt" 00:04:17.875 } 00:04:17.875 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.875 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:17.875 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:17.875 1 heaps totaling size 818.000000 MiB 00:04:17.875 size: 818.000000 MiB heap id: 0 00:04:17.875 end heaps---------- 00:04:17.875 9 mempools totaling size 603.782043 MiB 00:04:17.875 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:17.875 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:17.875 size: 100.555481 MiB name: bdev_io_4005882 00:04:17.875 size: 50.003479 MiB name: msgpool_4005882 00:04:17.875 size: 36.509338 MiB name: fsdev_io_4005882 00:04:17.875 size: 21.763794 MiB name: PDU_Pool 00:04:17.875 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:17.875 size: 4.133484 MiB name: evtpool_4005882 00:04:17.875 size: 0.026123 MiB name: Session_Pool 00:04:17.875 end mempools------- 00:04:17.875 6 memzones totaling size 4.142822 MiB 00:04:17.875 size: 1.000366 MiB name: RG_ring_0_4005882 00:04:17.875 size: 1.000366 MiB name: RG_ring_1_4005882 00:04:17.875 size: 1.000366 MiB name: RG_ring_4_4005882 00:04:17.875 size: 1.000366 MiB name: RG_ring_5_4005882 00:04:17.875 size: 0.125366 MiB name: RG_ring_2_4005882 00:04:17.875 size: 0.015991 MiB name: RG_ring_3_4005882 00:04:17.875 end memzones------- 00:04:17.875 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:17.875 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:17.875 list of free elements. 
size: 10.852478 MiB 00:04:17.875 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:17.875 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:17.875 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:17.875 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:17.875 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:17.875 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:17.875 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:17.875 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:17.875 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:17.875 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:17.875 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:17.875 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:17.875 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:17.875 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:17.875 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:17.875 list of standard malloc elements. size: 199.218628 MiB 00:04:17.875 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:17.875 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:17.875 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:17.875 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:17.876 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:17.876 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:17.876 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:17.876 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:17.876 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:17.876 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:17.876 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:17.876 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:17.876 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:17.876 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:17.876 list of memzone associated elements. size: 607.928894 MiB 00:04:17.876 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:17.876 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:17.876 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:17.876 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:17.876 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:17.876 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_4005882_0 00:04:17.876 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:17.876 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4005882_0 00:04:17.876 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:17.876 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4005882_0 00:04:17.876 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:17.876 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:17.876 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:17.876 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:17.876 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:17.876 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4005882_0 00:04:17.876 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:17.876 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4005882 00:04:17.876 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:17.876 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4005882 00:04:17.876 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:17.876 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:17.876 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:17.876 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:17.876 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:17.876 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:17.876 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:17.876 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:17.876 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:17.876 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4005882 00:04:17.876 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:17.876 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4005882 00:04:17.876 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:17.876 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4005882 00:04:17.876 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:17.876 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4005882 00:04:17.876 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:17.876 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4005882 00:04:17.876 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:17.876 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4005882 00:04:17.876 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:17.876 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:17.876 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:17.876 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:17.876 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:17.876 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:17.876 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:17.876 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4005882 00:04:17.876 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:17.876 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4005882 00:04:17.876 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:17.876 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:17.876 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:17.876 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:17.876 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:17.876 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4005882 00:04:17.876 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:17.876 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:17.876 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:17.876 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4005882 00:04:17.876 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:17.876 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4005882 00:04:17.876 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:17.876 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4005882 00:04:17.876 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:17.876 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:18.136 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:18.136 11:39:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4005882 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4005882 ']' 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4005882 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4005882 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.136 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4005882' 00:04:18.136 killing process with pid 4005882 00:04:18.136 11:39:25 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4005882 00:04:18.137 11:39:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4005882 00:04:18.137 00:04:18.137 real 0m1.383s 00:04:18.137 user 0m1.473s 00:04:18.137 sys 0m0.392s 00:04:18.137 11:39:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.137 11:39:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.137 ************************************ 00:04:18.137 END TEST dpdk_mem_utility 00:04:18.137 ************************************ 00:04:18.398 11:39:26 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.398 11:39:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.398 11:39:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.398 11:39:26 -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 ************************************ 00:04:18.398 START TEST event 00:04:18.398 ************************************ 00:04:18.398 11:39:26 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.398 * Looking for test storage... 00:04:18.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:18.398 11:39:26 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.398 11:39:26 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.398 11:39:26 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.398 11:39:26 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.398 11:39:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.398 11:39:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.398 11:39:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.398 11:39:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.398 11:39:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.398 11:39:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.398 11:39:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.398 11:39:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.398 11:39:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.398 11:39:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.398 11:39:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.398 11:39:26 event -- scripts/common.sh@344 -- # case "$op" in 00:04:18.398 11:39:26 event -- scripts/common.sh@345 -- # : 1 00:04:18.398 11:39:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.398 11:39:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.398 11:39:26 event -- scripts/common.sh@365 -- # decimal 1 00:04:18.398 11:39:26 event -- scripts/common.sh@353 -- # local d=1 00:04:18.398 11:39:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.398 11:39:26 event -- scripts/common.sh@355 -- # echo 1 00:04:18.398 11:39:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.398 11:39:26 event -- scripts/common.sh@366 -- # decimal 2 00:04:18.398 11:39:26 event -- scripts/common.sh@353 -- # local d=2 00:04:18.398 11:39:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.398 11:39:26 event -- scripts/common.sh@355 -- # echo 2 00:04:18.658 11:39:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.658 11:39:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.658 11:39:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.658 11:39:26 event -- scripts/common.sh@368 -- # return 0 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.658 --rc genhtml_branch_coverage=1 00:04:18.658 --rc genhtml_function_coverage=1 00:04:18.658 --rc genhtml_legend=1 00:04:18.658 --rc geninfo_all_blocks=1 00:04:18.658 --rc geninfo_unexecuted_blocks=1 00:04:18.658 00:04:18.658 ' 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.658 --rc genhtml_branch_coverage=1 00:04:18.658 --rc genhtml_function_coverage=1 00:04:18.658 --rc genhtml_legend=1 00:04:18.658 --rc geninfo_all_blocks=1 00:04:18.658 --rc geninfo_unexecuted_blocks=1 00:04:18.658 00:04:18.658 ' 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.658 --rc genhtml_branch_coverage=1 00:04:18.658 --rc genhtml_function_coverage=1 00:04:18.658 --rc genhtml_legend=1 00:04:18.658 --rc geninfo_all_blocks=1 00:04:18.658 --rc geninfo_unexecuted_blocks=1 00:04:18.658 00:04:18.658 ' 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.658 --rc genhtml_branch_coverage=1 00:04:18.658 --rc genhtml_function_coverage=1 00:04:18.658 --rc genhtml_legend=1 00:04:18.658 --rc geninfo_all_blocks=1 00:04:18.658 --rc geninfo_unexecuted_blocks=1 00:04:18.658 00:04:18.658 ' 00:04:18.658 11:39:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:18.658 11:39:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:18.658 11:39:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:18.658 11:39:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.658 11:39:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.658 ************************************ 00:04:18.658 START TEST event_perf 00:04:18.658 ************************************ 00:04:18.658 11:39:26 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:18.658 Running I/O for 1 seconds...[2024-12-09 11:39:26.348756] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:18.658 [2024-12-09 11:39:26.348867] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006286 ] 00:04:18.658 [2024-12-09 11:39:26.439902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:18.658 [2024-12-09 11:39:26.483288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.658 [2024-12-09 11:39:26.483413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:18.658 [2024-12-09 11:39:26.483575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.658 Running I/O for 1 seconds...[2024-12-09 11:39:26.483576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.083 00:04:20.083 lcore 0: 186133 00:04:20.083 lcore 1: 186135 00:04:20.083 lcore 2: 186134 00:04:20.083 lcore 3: 186136 00:04:20.083 done. 00:04:20.083 00:04:20.083 real 0m1.186s 00:04:20.083 user 0m4.091s 00:04:20.083 sys 0m0.092s 00:04:20.083 11:39:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.083 11:39:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:20.083 ************************************ 00:04:20.083 END TEST event_perf 00:04:20.083 ************************************ 00:04:20.083 11:39:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:20.083 11:39:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:20.083 11:39:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.083 11:39:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.083 ************************************ 00:04:20.083 START TEST event_reactor 00:04:20.083 ************************************ 00:04:20.083 11:39:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:20.083 [2024-12-09 11:39:27.611006] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:20.083 [2024-12-09 11:39:27.611109] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006618 ] 00:04:20.083 [2024-12-09 11:39:27.697316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.084 [2024-12-09 11:39:27.733606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.104 test_start 00:04:21.104 oneshot 00:04:21.104 tick 100 00:04:21.104 tick 100 00:04:21.104 tick 250 00:04:21.104 tick 100 00:04:21.104 tick 100 00:04:21.104 tick 250 00:04:21.104 tick 100 00:04:21.104 tick 500 00:04:21.104 tick 100 00:04:21.104 tick 100 00:04:21.104 tick 250 00:04:21.104 tick 100 00:04:21.104 tick 100 00:04:21.104 test_end 00:04:21.104 00:04:21.104 real 0m1.170s 00:04:21.104 user 0m1.086s 00:04:21.104 sys 0m0.081s 00:04:21.104 11:39:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.104 11:39:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:21.104 ************************************ 00:04:21.104 END TEST event_reactor 00:04:21.104 ************************************ 00:04:21.104 11:39:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.104 11:39:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:21.104 11:39:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.104 11:39:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.104 ************************************ 00:04:21.104 START TEST event_reactor_perf 00:04:21.104 ************************************ 00:04:21.104 11:39:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.104 [2024-12-09 11:39:28.861876] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:21.104 [2024-12-09 11:39:28.861976] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4006760 ] 00:04:21.104 [2024-12-09 11:39:28.951974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.416 [2024-12-09 11:39:28.990061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.442 test_start 00:04:22.442 test_end 00:04:22.442 Performance: 539194 events per second 00:04:22.442 00:04:22.442 real 0m1.178s 00:04:22.442 user 0m1.092s 00:04:22.442 sys 0m0.082s 00:04:22.442 11:39:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.442 11:39:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.442 ************************************ 00:04:22.442 END TEST event_reactor_perf 00:04:22.442 ************************************ 00:04:22.442 11:39:30 event -- event/event.sh@49 -- # uname -s 00:04:22.442 11:39:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:22.442 11:39:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:22.442 11:39:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.442 11:39:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.442 11:39:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.442 ************************************ 00:04:22.442 START TEST event_scheduler 00:04:22.442 ************************************ 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:22.442 * Looking for test storage... 
00:04:22.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.442 11:39:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.442 --rc genhtml_branch_coverage=1 00:04:22.442 --rc genhtml_function_coverage=1 00:04:22.442 --rc genhtml_legend=1 00:04:22.442 --rc geninfo_all_blocks=1 00:04:22.442 --rc geninfo_unexecuted_blocks=1 00:04:22.442 00:04:22.442 ' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.442 --rc genhtml_branch_coverage=1 00:04:22.442 --rc genhtml_function_coverage=1 00:04:22.442 --rc genhtml_legend=1 00:04:22.442 --rc geninfo_all_blocks=1 00:04:22.442 --rc geninfo_unexecuted_blocks=1 00:04:22.442 00:04:22.442 ' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.442 --rc genhtml_branch_coverage=1 00:04:22.442 --rc genhtml_function_coverage=1 00:04:22.442 --rc genhtml_legend=1 00:04:22.442 --rc geninfo_all_blocks=1 00:04:22.442 --rc geninfo_unexecuted_blocks=1 00:04:22.442 00:04:22.442 ' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.442 --rc genhtml_branch_coverage=1 00:04:22.442 --rc genhtml_function_coverage=1 00:04:22.442 --rc genhtml_legend=1 00:04:22.442 --rc geninfo_all_blocks=1 00:04:22.442 --rc geninfo_unexecuted_blocks=1 00:04:22.442 00:04:22.442 ' 00:04:22.442 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:22.442 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4007082 00:04:22.442 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.442 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4007082 00:04:22.442 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4007082 ']' 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.442 11:39:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 [2024-12-09 11:39:30.347440] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:22.761 [2024-12-09 11:39:30.347514] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007082 ] 00:04:22.761 [2024-12-09 11:39:30.414157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:22.761 [2024-12-09 11:39:30.455680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.761 [2024-12-09 11:39:30.455801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.761 [2024-12-09 11:39:30.455957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:22.761 [2024-12-09 11:39:30.455958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:22.761 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 [2024-12-09 11:39:30.504611] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:22.761 [2024-12-09 11:39:30.504625] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:22.761 [2024-12-09 11:39:30.504633] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:22.761 [2024-12-09 11:39:30.504641] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:22.761 [2024-12-09 11:39:30.504646] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.761 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 [2024-12-09 11:39:30.563230] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
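Because the scheduler app is started with --wait-for-rpc, the dynamic scheduler must be selected before framework initialization completes; the dpdk_governor *ERROR* above (the 0xF core mask covers only part of an SMT sibling set) is non-fatal here, and the test continues without the governor. Reduced to its essential ordering, and assuming the same workspace path, the startup is:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler while init is paused
    $SPDK/scripts/rpc.py framework_start_init              # reactors start under the chosen scheduler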
00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.761 11:39:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 ************************************ 00:04:22.761 START TEST scheduler_create_thread 00:04:22.761 ************************************ 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 2 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.761 3 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.761 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 4 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 5 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 6 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 7 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 8 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 9 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 10 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:23.022 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.023 11:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.964 11:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.964 11:39:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:23.964 11:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.964 11:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.350 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.350 11:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:25.350 11:39:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:25.350 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.350 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.294 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.294 00:04:26.294 real 0m3.383s 00:04:26.294 user 0m0.026s 00:04:26.294 sys 0m0.005s 00:04:26.294 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.294 11:39:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.294 ************************************ 00:04:26.294 END TEST scheduler_create_thread 00:04:26.294 ************************************ 00:04:26.294 11:39:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:26.294 11:39:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4007082 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4007082 ']' 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 4007082 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4007082 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4007082' 00:04:26.294 killing process with pid 4007082 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4007082 00:04:26.294 11:39:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4007082 00:04:26.555 [2024-12-09 11:39:34.366423] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
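Condensed from the xtrace above, the plugin RPCs walk a thread through its full lifecycle. The IDs 11 and 12 are the values this particular run returned, and --plugin loads the test's scheduler_plugin module, which must be importable (the suite keeps scheduler_plugin.py alongside the scheduler app):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned thread id 11 in this run
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50             # raise its busy cycles from 0% to 50%
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned thread id 12 in this run
    $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                    # and delete it again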
00:04:26.816 00:04:26.816 real 0m4.416s 00:04:26.816 user 0m7.627s 00:04:26.816 sys 0m0.384s 00:04:26.816 11:39:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.816 11:39:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 ************************************ 00:04:26.816 END TEST event_scheduler 00:04:26.816 ************************************ 00:04:26.816 11:39:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:26.816 11:39:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:26.816 11:39:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.816 11:39:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.816 11:39:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 ************************************ 00:04:26.816 START TEST app_repeat 00:04:26.816 ************************************ 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4008132 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4008132' 00:04:26.816 Process app_repeat pid: 4008132 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:26.816 spdk_app_start Round 0 00:04:26.816 11:39:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4008132 /var/tmp/spdk-nbd.sock 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4008132 ']' 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.816 11:39:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.816 [2024-12-09 11:39:34.639113] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:26.816 [2024-12-09 11:39:34.639176] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008132 ] 00:04:27.078 [2024-12-09 11:39:34.723327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.078 [2024-12-09 11:39:34.757455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.078 [2024-12-09 11:39:34.757455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.078 11:39:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.078 11:39:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:27.078 11:39:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.339 Malloc0 00:04:27.339 11:39:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.339 Malloc1 00:04:27.339 11:39:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.339 11:39:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:27.599 /dev/nbd0 00:04:27.599 11:39:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:27.599 11:39:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.599 1+0 records in 00:04:27.599 1+0 records out 00:04:27.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285133 s, 14.4 MB/s 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.599 11:39:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.599 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.599 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.599 11:39:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.860 /dev/nbd1 00:04:27.860 11:39:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.860 11:39:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.861 1+0 records in 00:04:27.861 1+0 records out 00:04:27.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270852 s, 15.1 MB/s 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.861 11:39:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.861 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.861 11:39:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.861 
11:39:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.861 11:39:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.861 11:39:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:28.122 { 00:04:28.122 "nbd_device": "/dev/nbd0", 00:04:28.122 "bdev_name": "Malloc0" 00:04:28.122 }, 00:04:28.122 { 00:04:28.122 "nbd_device": "/dev/nbd1", 00:04:28.122 "bdev_name": "Malloc1" 00:04:28.122 } 00:04:28.122 ]' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.122 { 00:04:28.122 "nbd_device": "/dev/nbd0", 00:04:28.122 "bdev_name": "Malloc0" 00:04:28.122 }, 00:04:28.122 { 00:04:28.122 "nbd_device": "/dev/nbd1", 00:04:28.122 "bdev_name": "Malloc1" 00:04:28.122 } 00:04:28.122 ]' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.122 /dev/nbd1' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.122 /dev/nbd1' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:28.122 256+0 records in 00:04:28.122 256+0 records out 00:04:28.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119602 s, 87.7 MB/s 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:28.122 256+0 records in 00:04:28.122 256+0 records out 00:04:28.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117468 s, 89.3 MB/s 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.122 11:39:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:28.123 256+0 records in 00:04:28.123 256+0 records out 00:04:28.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130659 s, 80.3 MB/s 00:04:28.123 11:39:35 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.123 11:39:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.383 11:39:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:28.644 11:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.904 11:39:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.904 11:39:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.904 11:39:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:29.163 [2024-12-09 11:39:36.807852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.163 [2024-12-09 11:39:36.837016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.163 [2024-12-09 11:39:36.837016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.163 [2024-12-09 11:39:36.865959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:29.163 [2024-12-09 11:39:36.865993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:32.466 11:39:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.466 11:39:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:32.466 spdk_app_start Round 1 00:04:32.466 11:39:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4008132 /var/tmp/spdk-nbd.sock 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4008132 ']' 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
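Note: each app_repeat round in this trace repeats the same cycle against a freshly restarted app: wait for /var/tmp/spdk-nbd.sock, create two 64 MB Malloc bdevs with a 4096-byte block size, export them as /dev/nbd0 and /dev/nbd1, run the write/verify pass, stop both nbd disks, then request shutdown with spdk_kill_instance SIGTERM and sleep 3 before the next round. Roughly, where rpc_py is an assumed shorthand for the scripts/rpc.py invocation seen in the trace:

    rpc_py="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # assumed shorthand, not a variable from the trace
    $rpc_py bdev_malloc_create 64 4096                  # -> Malloc0
    $rpc_py bdev_malloc_create 64 4096                  # -> Malloc1
    $rpc_py nbd_start_disk Malloc0 /dev/nbd0
    $rpc_py nbd_start_disk Malloc1 /dev/nbd1
    # ... dd write + cmp verify against both devices (see the condensed pass further below) ...
    $rpc_py nbd_stop_disk /dev/nbd0
    $rpc_py nbd_stop_disk /dev/nbd1
    $rpc_py spdk_kill_instance SIGTERM
    sleep 3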
00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.466 11:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:32.466 11:39:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.466 Malloc0 00:04:32.466 11:39:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.466 Malloc1 00:04:32.466 11:39:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.466 11:39:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:32.728 /dev/nbd0 00:04:32.728 11:39:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.728 11:39:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:32.728 1+0 records in 00:04:32.728 1+0 records out 00:04:32.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271376 s, 15.1 MB/s 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.728 11:39:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.728 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.728 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.728 11:39:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.988 /dev/nbd1 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.989 1+0 records in 00:04:32.989 1+0 records out 00:04:32.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245542 s, 16.7 MB/s 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:32.989 11:39:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:32.989 { 00:04:32.989 "nbd_device": "/dev/nbd0", 00:04:32.989 "bdev_name": "Malloc0" 00:04:32.989 }, 00:04:32.989 { 00:04:32.989 "nbd_device": "/dev/nbd1", 00:04:32.989 "bdev_name": "Malloc1" 00:04:32.989 } 00:04:32.989 ]' 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.989 { 00:04:32.989 "nbd_device": "/dev/nbd0", 00:04:32.989 "bdev_name": "Malloc0" 00:04:32.989 }, 00:04:32.989 { 00:04:32.989 "nbd_device": "/dev/nbd1", 00:04:32.989 "bdev_name": "Malloc1" 00:04:32.989 } 00:04:32.989 ]' 00:04:32.989 11:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.250 /dev/nbd1' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.250 /dev/nbd1' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.250 256+0 records in 00:04:33.250 256+0 records out 00:04:33.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125666 s, 83.4 MB/s 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.250 256+0 records in 00:04:33.250 256+0 records out 00:04:33.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123506 s, 84.9 MB/s 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.250 256+0 records in 00:04:33.250 256+0 records out 00:04:33.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305236 s, 34.4 MB/s 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.250 11:39:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.510 11:39:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.511 11:39:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.771 11:39:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.771 11:39:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.032 11:39:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:34.032 [2024-12-09 11:39:41.855331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.032 [2024-12-09 11:39:41.884566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.032 [2024-12-09 11:39:41.884566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.032 [2024-12-09 11:39:41.914010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:34.032 [2024-12-09 11:39:41.914042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:37.344 11:39:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.344 11:39:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:37.344 spdk_app_start Round 2 00:04:37.344 11:39:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4008132 /var/tmp/spdk-nbd.sock 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4008132 ']' 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
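Note: the nbd_dd_data_verify write and verify passes in the trace boil down to one 1 MiB random file reused for both devices: fill it from /dev/urandom, dd it onto each nbd device with oflag=direct, then cmp the first 1M of each device back against the file and delete it. Condensed from the commands above (the tmp-file path and all flags are taken verbatim from the trace):

    tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                              # verify pass; any mismatch fails the test
    done
    rm $tmp_file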
00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.344 11:39:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:37.344 11:39:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.344 Malloc0 00:04:37.344 11:39:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.607 Malloc1 00:04:37.607 11:39:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.607 11:39:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.607 /dev/nbd0 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:37.869 1+0 records in 00:04:37.869 1+0 records out 00:04:37.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276363 s, 14.8 MB/s 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.869 /dev/nbd1 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.869 1+0 records in 00:04:37.869 1+0 records out 00:04:37.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305985 s, 13.4 MB/s 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.869 11:39:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.869 11:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.131 11:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:38.131 { 00:04:38.131 "nbd_device": "/dev/nbd0", 00:04:38.131 "bdev_name": "Malloc0" 00:04:38.131 }, 00:04:38.131 { 00:04:38.131 "nbd_device": "/dev/nbd1", 00:04:38.131 "bdev_name": "Malloc1" 00:04:38.131 } 00:04:38.131 ]' 00:04:38.131 11:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.131 { 00:04:38.131 "nbd_device": "/dev/nbd0", 00:04:38.131 "bdev_name": "Malloc0" 00:04:38.131 }, 00:04:38.131 { 00:04:38.131 "nbd_device": "/dev/nbd1", 00:04:38.132 "bdev_name": "Malloc1" 00:04:38.132 } 00:04:38.132 ]' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.132 /dev/nbd1' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.132 /dev/nbd1' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.132 256+0 records in 00:04:38.132 256+0 records out 00:04:38.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127545 s, 82.2 MB/s 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.132 11:39:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.132 256+0 records in 00:04:38.132 256+0 records out 00:04:38.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116052 s, 90.4 MB/s 00:04:38.132 11:39:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.132 11:39:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.393 256+0 records in 00:04:38.393 256+0 records out 00:04:38.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124146 s, 84.5 MB/s 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.393 11:39:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.655 11:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.916 11:39:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.916 11:39:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.177 11:39:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.177 [2024-12-09 11:39:46.921600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.177 [2024-12-09 11:39:46.950473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.177 [2024-12-09 11:39:46.950473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.177 [2024-12-09 11:39:46.979398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.177 [2024-12-09 11:39:46.979430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.479 11:39:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4008132 /var/tmp/spdk-nbd.sock 00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4008132 ']' 00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
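Note: the waitfornbd and waitfornbd_exit helpers visible throughout this trace are /proc/partitions polls with a retry bound of 20. waitfornbd waits for the nbd name to appear and then issues a single 4096-byte direct read (dd ... bs=4096 count=1 iflag=direct) to confirm the device answers I/O; waitfornbd_exit waits for the name to disappear after nbd_stop_disk. A reconstruction of the exit variant, matching the grep/break/return sequence in the xtrace (the sleep interval is an assumption; the trace only shows the loop bound and the grep):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # assumed interval; not visible in the xtrace
            else
                break            # device gone from /proc/partitions: stopped cleanly
            fi
        done
        return 0
    }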
00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.479 11:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:42.479 11:39:50 event.app_repeat -- event/event.sh@39 -- # killprocess 4008132 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4008132 ']' 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4008132 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4008132 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4008132' 00:04:42.479 killing process with pid 4008132 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4008132 00:04:42.479 11:39:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4008132 00:04:42.479 spdk_app_start is called in Round 0. 00:04:42.479 Shutdown signal received, stop current app iteration 00:04:42.479 Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 reinitialization... 00:04:42.479 spdk_app_start is called in Round 1. 00:04:42.479 Shutdown signal received, stop current app iteration 00:04:42.479 Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 reinitialization... 00:04:42.479 spdk_app_start is called in Round 2. 00:04:42.479 Shutdown signal received, stop current app iteration 00:04:42.479 Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 reinitialization... 00:04:42.479 spdk_app_start is called in Round 3. 
00:04:42.480 Shutdown signal received, stop current app iteration 00:04:42.480 11:39:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:42.480 11:39:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:42.480 00:04:42.480 real 0m15.551s 00:04:42.480 user 0m33.979s 00:04:42.480 sys 0m2.245s 00:04:42.480 11:39:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.480 11:39:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.480 ************************************ 00:04:42.480 END TEST app_repeat 00:04:42.480 ************************************ 00:04:42.480 11:39:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.480 11:39:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.480 11:39:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.480 11:39:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.480 11:39:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.480 ************************************ 00:04:42.480 START TEST cpu_locks 00:04:42.480 ************************************ 00:04:42.480 11:39:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.480 * Looking for test storage... 00:04:42.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.480 11:39:50 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.480 11:39:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.480 11:39:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.753 11:39:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:42.753 11:39:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.754 11:39:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.754 --rc genhtml_branch_coverage=1 00:04:42.754 --rc genhtml_function_coverage=1 00:04:42.754 --rc genhtml_legend=1 00:04:42.754 --rc geninfo_all_blocks=1 00:04:42.754 --rc geninfo_unexecuted_blocks=1 00:04:42.754 00:04:42.754 ' 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.754 --rc genhtml_branch_coverage=1 00:04:42.754 --rc genhtml_function_coverage=1 00:04:42.754 --rc genhtml_legend=1 00:04:42.754 --rc geninfo_all_blocks=1 00:04:42.754 --rc geninfo_unexecuted_blocks=1 00:04:42.754 00:04:42.754 ' 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.754 --rc genhtml_branch_coverage=1 00:04:42.754 --rc genhtml_function_coverage=1 00:04:42.754 --rc genhtml_legend=1 00:04:42.754 --rc geninfo_all_blocks=1 00:04:42.754 --rc geninfo_unexecuted_blocks=1 00:04:42.754 00:04:42.754 ' 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.754 --rc genhtml_branch_coverage=1 00:04:42.754 --rc genhtml_function_coverage=1 00:04:42.754 --rc genhtml_legend=1 00:04:42.754 --rc geninfo_all_blocks=1 00:04:42.754 --rc geninfo_unexecuted_blocks=1 00:04:42.754 00:04:42.754 ' 00:04:42.754 11:39:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.754 11:39:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.754 11:39:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.754 11:39:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.754 11:39:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.754 ************************************ 
00:04:42.754 START TEST default_locks 00:04:42.754 ************************************ 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4011398 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4011398 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4011398 ']' 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.754 11:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.754 [2024-12-09 11:39:50.530295] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:42.754 [2024-12-09 11:39:50.530359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011398 ] 00:04:42.754 [2024-12-09 11:39:50.617781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.023 [2024-12-09 11:39:50.653026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.595 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.595 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:43.595 11:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4011398 00:04:43.595 11:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4011398 00:04:43.595 11:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.166 lslocks: write error 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4011398 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4011398 ']' 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4011398 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011398 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 4011398' 00:04:44.166 killing process with pid 4011398 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4011398 00:04:44.166 11:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4011398 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4011398 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4011398 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4011398 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4011398 ']' 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4011398) - No such process 00:04:44.428 ERROR: process (pid: 4011398) is no longer running 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.428 00:04:44.428 real 0m1.622s 00:04:44.428 user 0m1.734s 00:04:44.428 sys 0m0.582s 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.428 11:39:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.428 ************************************ 00:04:44.428 END TEST default_locks 00:04:44.428 ************************************ 00:04:44.428 11:39:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.428 11:39:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.428 11:39:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.428 11:39:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.428 ************************************ 00:04:44.428 START TEST default_locks_via_rpc 00:04:44.428 ************************************ 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4011769 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4011769 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4011769 ']' 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
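The default_locks test that finished above exercises the most basic contract: an spdk_tgt started with -m 0x1 takes an advisory lock for core 0 that lslocks can see (locks_exist), and killing the target releases it, so the subsequent NOT waitforlisten on the dead pid fails as required. A minimal sketch of the same observation, assuming the workspace paths traced above (the "lslocks: write error" in the log is benign pipe noise from grep -q closing the pipe early):

  # sketch only: watch a core lock appear and vanish with its owner
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 & pid=$!
  sleep 1
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock held by $pid"
  kill "$pid"; wait "$pid" 2>/dev/null
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "lock released"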
00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.428 11:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.428 [2024-12-09 11:39:52.224148] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:44.428 [2024-12-09 11:39:52.224205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011769 ] 00:04:44.428 [2024-12-09 11:39:52.308874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.689 [2024-12-09 11:39:52.343773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4011769 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4011769 00:04:45.263 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4011769 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4011769 ']' 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4011769 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4011769 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.834 
11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4011769' 00:04:45.834 killing process with pid 4011769 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4011769 00:04:45.834 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4011769 00:04:46.095 00:04:46.095 real 0m1.574s 00:04:46.095 user 0m1.694s 00:04:46.096 sys 0m0.545s 00:04:46.096 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.096 11:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.096 ************************************ 00:04:46.096 END TEST default_locks_via_rpc 00:04:46.096 ************************************ 00:04:46.096 11:39:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.096 11:39:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.096 11:39:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.096 11:39:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.096 ************************************ 00:04:46.096 START TEST non_locking_app_on_locked_coremask 00:04:46.096 ************************************ 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4012132 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4012132 /var/tmp/spdk.sock 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4012132 ']' 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.096 11:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.096 [2024-12-09 11:39:53.873372] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
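default_locks_via_rpc, which ended above, shows the same lock can be dropped and re-taken without restarting the target: framework_disable_cpumask_locks removes the core lock while spdk_tgt keeps running (the traced no_locks finds an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks re-acquires it so locks_exist succeeds again. A sketch against an already-running target on the default /var/tmp/spdk.sock, using the same rpc.py path as above:

  # sketch only: toggle core-mask locks over RPC, as rpc_cmd does above
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "locks dropped"
  "$rpc" framework_enable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_*              # core 0 lock is back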
00:04:46.096 [2024-12-09 11:39:53.873422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012132 ] 00:04:46.096 [2024-12-09 11:39:53.957769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.357 [2024-12-09 11:39:53.988839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.928 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.928 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:46.928 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4012432 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4012432 /var/tmp/spdk2.sock 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4012432 ']' 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.929 11:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.929 [2024-12-09 11:39:54.712353] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:46.929 [2024-12-09 11:39:54.712410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012432 ] 00:04:46.929 [2024-12-09 11:39:54.801704] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.929 [2024-12-09 11:39:54.801730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.188 [2024-12-09 11:39:54.864086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.757 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.757 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.757 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4012132 00:04:47.757 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4012132 00:04:47.757 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.018 lslocks: write error 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4012132 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4012132 ']' 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4012132 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4012132 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4012132' 00:04:48.018 killing process with pid 4012132 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4012132 00:04:48.018 11:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4012132 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4012432 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4012432 ']' 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4012432 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4012432 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4012432' 00:04:48.590 
killing process with pid 4012432 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4012432 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4012432 00:04:48.590 00:04:48.590 real 0m2.630s 00:04:48.590 user 0m2.938s 00:04:48.590 sys 0m0.775s 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.590 11:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.590 ************************************ 00:04:48.590 END TEST non_locking_app_on_locked_coremask 00:04:48.590 ************************************ 00:04:48.852 11:39:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:48.852 11:39:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.852 11:39:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.852 11:39:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.852 ************************************ 00:04:48.852 START TEST locking_app_on_unlocked_coremask 00:04:48.852 ************************************ 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4012835 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4012835 /var/tmp/spdk.sock 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4012835 ']' 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.852 11:39:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.852 [2024-12-09 11:39:56.578087] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:48.852 [2024-12-09 11:39:56.578137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012835 ] 00:04:48.852 [2024-12-09 11:39:56.660032] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
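non_locking_app_on_locked_coremask, finished above, is the sharing case: the first target locks core 0, and the second is started with --disable-cpumask-locks, so instead of contending for the lock it logs "CPU core locks deactivated." and comes up on its own socket. A sketch of that pairing, with paths as traced:

  # sketch only: second target opts out of locking and shares core 0
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 &
  sleep 1
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &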
00:04:48.852 [2024-12-09 11:39:56.660054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.852 [2024-12-09 11:39:56.690869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4012854 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4012854 /var/tmp/spdk2.sock 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4012854 ']' 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.795 11:39:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.795 [2024-12-09 11:39:57.421335] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:49.795 [2024-12-09 11:39:57.421407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012854 ] 00:04:49.795 [2024-12-09 11:39:57.510420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.795 [2024-12-09 11:39:57.568888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.368 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.368 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:50.368 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4012854 00:04:50.368 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4012854 00:04:50.368 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.940 lslocks: write error 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4012835 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4012835 ']' 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4012835 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4012835 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4012835' 00:04:50.940 killing process with pid 4012835 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4012835 00:04:50.940 11:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4012835 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4012854 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4012854 ']' 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4012854 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4012854 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.513 11:39:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4012854' 00:04:51.513 killing process with pid 4012854 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4012854 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4012854 00:04:51.513 00:04:51.513 real 0m2.849s 00:04:51.513 user 0m3.175s 00:04:51.513 sys 0m0.881s 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.513 11:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.513 ************************************ 00:04:51.513 END TEST locking_app_on_unlocked_coremask 00:04:51.513 ************************************ 00:04:51.774 11:39:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:51.774 11:39:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.774 11:39:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.774 11:39:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.774 ************************************ 00:04:51.774 START TEST locking_app_on_locked_coremask 00:04:51.774 ************************************ 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4013364 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4013364 /var/tmp/spdk.sock 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4013364 ']' 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.774 11:39:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.774 [2024-12-09 11:39:59.500351] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
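locking_app_on_unlocked_coremask, finished above, is the mirror image: the first target starts with --disable-cpumask-locks and never claims core 0, leaving a later, lock-taking instance free to acquire it, so both processes coexist and the traced lslocks finds the lock held by the second pid. Sketch under the same assumptions:

  # sketch only: unlocked first, locking second -- both run on core 0
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 --disable-cpumask-locks &
  sleep 1
  "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &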
00:04:51.774 [2024-12-09 11:39:59.500400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013364 ] 00:04:51.774 [2024-12-09 11:39:59.582971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.774 [2024-12-09 11:39:59.613039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4013560 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4013560 /var/tmp/spdk2.sock 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4013560 /var/tmp/spdk2.sock 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4013560 /var/tmp/spdk2.sock 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4013560 ']' 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.717 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.717 [2024-12-09 11:40:00.313997] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:52.717 [2024-12-09 11:40:00.314051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013560 ] 00:04:52.717 [2024-12-09 11:40:00.397874] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4013364 has claimed it. 00:04:52.717 [2024-12-09 11:40:00.397905] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4013560) - No such process 00:04:53.299 ERROR: process (pid: 4013560) is no longer running 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4013364 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4013364 00:04:53.299 11:40:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.560 lslocks: write error 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4013364 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4013364 ']' 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4013364 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.560 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4013364 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4013364' 00:04:53.822 killing process with pid 4013364 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4013364 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4013364 00:04:53.822 00:04:53.822 real 0m2.221s 00:04:53.822 user 0m2.477s 00:04:53.822 sys 0m0.614s 00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:53.822 11:40:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.822 ************************************ 00:04:53.822 END TEST locking_app_on_locked_coremask 00:04:53.822 ************************************ 00:04:53.822 11:40:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:53.822 11:40:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.822 11:40:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.822 11:40:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.083 ************************************ 00:04:54.083 START TEST locking_overlapped_coremask 00:04:54.083 ************************************ 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4013921 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4013921 /var/tmp/spdk.sock 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4013921 ']' 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.083 11:40:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.083 [2024-12-09 11:40:01.799119] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
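locking_app_on_locked_coremask, finished above, is the conflict case: both instances want the lock, so the second one aborts during startup with "Cannot create lock on core 0, probably process ... has claimed it", and the NOT waitforlisten wrapper turns that expected failure into a pass. Sketch:

  # sketch only: second lock-taking target must fail while core 0 is claimed
  tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 &
  sleep 1
  "$tgt" -m 0x1 -r /var/tmp/spdk2.sock   # exits: core 0 already claimed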
00:04:54.083 [2024-12-09 11:40:01.799172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013921 ] 00:04:54.083 [2024-12-09 11:40:01.881988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.083 [2024-12-09 11:40:01.914127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.083 [2024-12-09 11:40:01.914241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.083 [2024-12-09 11:40:01.914243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4013959 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4013959 /var/tmp/spdk2.sock 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4013959 /var/tmp/spdk2.sock 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4013959 /var/tmp/spdk2.sock 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4013959 ']' 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.026 11:40:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.026 [2024-12-09 11:40:02.641063] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:04:55.026 [2024-12-09 11:40:02.641116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013959 ] 00:04:55.026 [2024-12-09 11:40:02.729486] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4013921 has claimed it. 00:04:55.026 [2024-12-09 11:40:02.729521] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4013959) - No such process 00:04:55.597 ERROR: process (pid: 4013959) is no longer running 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.597 11:40:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4013921 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4013921 ']' 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4013921 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4013921 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4013921' 00:04:55.598 killing process with pid 4013921 00:04:55.598 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4013921 00:04:55.598 11:40:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4013921 00:04:55.858 00:04:55.858 real 0m1.778s 00:04:55.858 user 0m5.162s 00:04:55.858 sys 0m0.381s 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.858 ************************************ 00:04:55.858 END TEST locking_overlapped_coremask 00:04:55.858 ************************************ 00:04:55.858 11:40:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:55.858 11:40:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.858 11:40:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.858 11:40:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.858 ************************************ 00:04:55.858 START TEST locking_overlapped_coremask_via_rpc 00:04:55.858 ************************************ 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4014301 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4014301 /var/tmp/spdk.sock 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4014301 ']' 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.858 11:40:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.858 [2024-12-09 11:40:03.654571] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:55.858 [2024-12-09 11:40:03.654623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014301 ] 00:04:55.858 [2024-12-09 11:40:03.736751] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
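For reference, the lock layout both of these tests verify is one file per claimed core under /var/tmp. A minimal, readable sketch of the harness's check_remaining_locks (the real helper in event/cpu_locks.sh, visible in the trace above, compares the same two arrays using an escaped glob pattern):

    # One /var/tmp/spdk_cpu_lock_NNN file should exist per core the first
    # target (-m 0x7, cores 0-2) still holds.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # files actually present
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1, 2
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]        # literal comparison
    }

The via_rpc variant starting here launches both targets with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices), so locking can be turned on later over JSON-RPC instead of at startup.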
00:04:55.858 [2024-12-09 11:40:03.736772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.118 [2024-12-09 11:40:03.769284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.118 [2024-12-09 11:40:03.769400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.118 [2024-12-09 11:40:03.769401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4014388 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4014388 /var/tmp/spdk2.sock 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4014388 ']' 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.689 11:40:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.689 [2024-12-09 11:40:04.472778] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:56.689 [2024-12-09 11:40:04.472831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014388 ] 00:04:56.689 [2024-12-09 11:40:04.559343] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
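The two masks are chosen to collide on exactly one core: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so their intersection is bit 2. That is why every claim failure in this log names core 2:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only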
00:04:56.689 [2024-12-09 11:40:04.559363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.949 [2024-12-09 11:40:04.618345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.949 [2024-12-09 11:40:04.621709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.949 [2024-12-09 11:40:04.621711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 [2024-12-09 11:40:05.290696] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4014301 has claimed it. 
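With locks disabled at startup, the conflict surfaces through the RPC layer rather than at boot: the first target enables locking over its default socket, then the second target is asked to do the same and must fail while pid 4014301 holds core 2. The equivalent manual check, assuming the workspace paths shown above and a target listening on the second socket:

    # Issued against the second target; expected to fail while the first
    # target holds the lock on core 2.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The JSON-RPC error that follows (-32603, "Failed to claim CPU core: 2") is exactly the failure the NOT wrapper treats as a pass.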
00:04:57.520 request: 00:04:57.520 { 00:04:57.520 "method": "framework_enable_cpumask_locks", 00:04:57.520 "req_id": 1 00:04:57.520 } 00:04:57.520 Got JSON-RPC error response 00:04:57.520 response: 00:04:57.520 { 00:04:57.520 "code": -32603, 00:04:57.520 "message": "Failed to claim CPU core: 2" 00:04:57.520 } 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4014301 /var/tmp/spdk.sock 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4014301 ']' 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.520 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4014388 /var/tmp/spdk2.sock 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4014388 ']' 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
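The es bookkeeping above is the harness's negative-assertion idiom: the wrapped command's exit status is captured and the step passes only if it is non-zero. Stripped of the argument validation done by test/common/autotest_common.sh, the idea reduces to:

    NOT() { ! "$@"; }     # succeed only when the wrapped command fails
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks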
00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.781 00:04:57.781 real 0m2.068s 00:04:57.781 user 0m0.838s 00:04:57.781 sys 0m0.161s 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.781 11:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 ************************************ 00:04:57.781 END TEST locking_overlapped_coremask_via_rpc 00:04:57.781 ************************************ 00:04:58.041 11:40:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:58.041 11:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4014301 ]] 00:04:58.041 11:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4014301 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4014301 ']' 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4014301 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4014301 00:04:58.041 11:40:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.042 11:40:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.042 11:40:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4014301' 00:04:58.042 killing process with pid 4014301 00:04:58.042 11:40:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4014301 00:04:58.042 11:40:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4014301 00:04:58.303 11:40:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4014388 ]] 00:04:58.303 11:40:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4014388 00:04:58.303 11:40:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4014388 ']' 00:04:58.303 11:40:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4014388 00:04:58.303 11:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:58.303 11:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:58.303 11:40:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4014388 00:04:58.303 11:40:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:58.303 11:40:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:58.303 11:40:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4014388' 00:04:58.303 killing process with pid 4014388 00:04:58.303 11:40:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4014388 00:04:58.303 11:40:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4014388 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4014301 ]] 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4014301 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4014301 ']' 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4014301 00:04:58.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4014301) - No such process 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4014301 is not found' 00:04:58.565 Process with pid 4014301 is not found 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4014388 ]] 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4014388 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4014388 ']' 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4014388 00:04:58.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4014388) - No such process 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4014388 is not found' 00:04:58.565 Process with pid 4014388 is not found 00:04:58.565 11:40:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.565 00:04:58.565 real 0m16.005s 00:04:58.565 user 0m28.164s 00:04:58.565 sys 0m4.849s 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.565 11:40:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.565 ************************************ 00:04:58.565 END TEST cpu_locks 00:04:58.565 ************************************ 00:04:58.565 00:04:58.565 real 0m40.180s 00:04:58.565 user 1m16.326s 00:04:58.565 sys 0m8.157s 00:04:58.565 11:40:06 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.565 11:40:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.565 ************************************ 00:04:58.565 END TEST event 00:04:58.565 ************************************ 00:04:58.565 11:40:06 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.565 11:40:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.565 11:40:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.565 11:40:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.565 ************************************ 00:04:58.565 START TEST thread 00:04:58.565 ************************************ 00:04:58.565 11:40:06 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.565 * Looking for test storage... 00:04:58.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.826 11:40:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.826 11:40:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.826 11:40:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.826 11:40:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.826 11:40:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.826 11:40:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.826 11:40:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.826 11:40:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.826 11:40:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.826 11:40:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.826 11:40:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.826 11:40:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:58.826 11:40:06 thread -- scripts/common.sh@345 -- # : 1 00:04:58.826 11:40:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.826 11:40:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.826 11:40:06 thread -- scripts/common.sh@365 -- # decimal 1 00:04:58.826 11:40:06 thread -- scripts/common.sh@353 -- # local d=1 00:04:58.826 11:40:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.826 11:40:06 thread -- scripts/common.sh@355 -- # echo 1 00:04:58.826 11:40:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.826 11:40:06 thread -- scripts/common.sh@366 -- # decimal 2 00:04:58.826 11:40:06 thread -- scripts/common.sh@353 -- # local d=2 00:04:58.826 11:40:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.826 11:40:06 thread -- scripts/common.sh@355 -- # echo 2 00:04:58.826 11:40:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.826 11:40:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.826 11:40:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.826 11:40:06 thread -- scripts/common.sh@368 -- # return 0 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.826 --rc genhtml_branch_coverage=1 00:04:58.826 --rc genhtml_function_coverage=1 00:04:58.826 --rc genhtml_legend=1 00:04:58.826 --rc geninfo_all_blocks=1 00:04:58.826 --rc geninfo_unexecuted_blocks=1 00:04:58.826 00:04:58.826 ' 00:04:58.826 11:40:06 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.826 --rc genhtml_branch_coverage=1 00:04:58.827 --rc genhtml_function_coverage=1 00:04:58.827 --rc genhtml_legend=1 00:04:58.827 --rc geninfo_all_blocks=1 00:04:58.827 --rc geninfo_unexecuted_blocks=1 00:04:58.827 
00:04:58.827 ' 00:04:58.827 11:40:06 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.827 --rc genhtml_branch_coverage=1 00:04:58.827 --rc genhtml_function_coverage=1 00:04:58.827 --rc genhtml_legend=1 00:04:58.827 --rc geninfo_all_blocks=1 00:04:58.827 --rc geninfo_unexecuted_blocks=1 00:04:58.827 00:04:58.827 ' 00:04:58.827 11:40:06 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.827 --rc genhtml_branch_coverage=1 00:04:58.827 --rc genhtml_function_coverage=1 00:04:58.827 --rc genhtml_legend=1 00:04:58.827 --rc geninfo_all_blocks=1 00:04:58.827 --rc geninfo_unexecuted_blocks=1 00:04:58.827 00:04:58.827 ' 00:04:58.827 11:40:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.827 11:40:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:58.827 11:40:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.827 11:40:06 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.827 ************************************ 00:04:58.827 START TEST thread_poller_perf 00:04:58.827 ************************************ 00:04:58.827 11:40:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.827 [2024-12-09 11:40:06.617025] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:04:58.827 [2024-12-09 11:40:06.617122] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015078 ] 00:04:58.827 [2024-12-09 11:40:06.708141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.087 [2024-12-09 11:40:06.748401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.087 Running 1000 pollers for 1 seconds with 1 microseconds period. 
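The figures in the result block that follows are related by simple arithmetic: poller_cost is total busy TSC cycles divided by the number of poller runs, converted to nanoseconds via the reported tsc_hz. Using the run-1 numbers below:

    echo $(( 2406885252 / 419000 ))              # -> 5744 cyc per poll
    echo $(( 5744 * 1000000000 / 2400000000 ))   # -> 2393 nsec per poll

The second run (0 us period, reported further down) follows the same identity: 2401422774 / 5049000 = 475 cyc, or 197 nsec at 2.4 GHz.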
00:05:00.027 [2024-12-09T10:40:07.913Z] ====================================== 00:05:00.027 [2024-12-09T10:40:07.913Z] busy:2406885252 (cyc) 00:05:00.027 [2024-12-09T10:40:07.913Z] total_run_count: 419000 00:05:00.027 [2024-12-09T10:40:07.913Z] tsc_hz: 2400000000 (cyc) 00:05:00.027 [2024-12-09T10:40:07.913Z] ====================================== 00:05:00.027 [2024-12-09T10:40:07.913Z] poller_cost: 5744 (cyc), 2393 (nsec) 00:05:00.027 00:05:00.027 real 0m1.186s 00:05:00.027 user 0m1.089s 00:05:00.027 sys 0m0.093s 00:05:00.027 11:40:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.027 11:40:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.027 ************************************ 00:05:00.027 END TEST thread_poller_perf 00:05:00.027 ************************************ 00:05:00.027 11:40:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.027 11:40:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:00.027 11:40:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.027 11:40:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.027 ************************************ 00:05:00.027 START TEST thread_poller_perf 00:05:00.027 ************************************ 00:05:00.027 11:40:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.027 [2024-12-09 11:40:07.879744] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:05:00.027 [2024-12-09 11:40:07.879844] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015233 ] 00:05:00.287 [2024-12-09 11:40:07.940236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.287 [2024-12-09 11:40:07.971138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.287 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:01.230 [2024-12-09T10:40:09.116Z] ====================================== 00:05:01.230 [2024-12-09T10:40:09.116Z] busy:2401422774 (cyc) 00:05:01.230 [2024-12-09T10:40:09.116Z] total_run_count: 5049000 00:05:01.230 [2024-12-09T10:40:09.116Z] tsc_hz: 2400000000 (cyc) 00:05:01.230 [2024-12-09T10:40:09.116Z] ====================================== 00:05:01.230 [2024-12-09T10:40:09.116Z] poller_cost: 475 (cyc), 197 (nsec) 00:05:01.230 00:05:01.230 real 0m1.141s 00:05:01.230 user 0m1.081s 00:05:01.230 sys 0m0.056s 00:05:01.230 11:40:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.230 11:40:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.230 ************************************ 00:05:01.230 END TEST thread_poller_perf 00:05:01.230 ************************************ 00:05:01.230 11:40:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:01.230 00:05:01.230 real 0m2.688s 00:05:01.230 user 0m2.339s 00:05:01.230 sys 0m0.365s 00:05:01.230 11:40:09 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.230 11:40:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.230 ************************************ 00:05:01.230 END TEST thread 00:05:01.230 ************************************ 00:05:01.230 11:40:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:01.230 11:40:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.230 11:40:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.230 11:40:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.230 11:40:09 -- common/autotest_common.sh@10 -- # set +x 00:05:01.492 ************************************ 00:05:01.492 START TEST app_cmdline 00:05:01.492 ************************************ 00:05:01.492 11:40:09 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.492 * Looking for test storage... 
00:05:01.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:01.492 11:40:09 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.492 11:40:09 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.492 11:40:09 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.492 11:40:09 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:01.492 11:40:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.493 11:40:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.493 --rc genhtml_branch_coverage=1 00:05:01.493 --rc genhtml_function_coverage=1 00:05:01.493 --rc genhtml_legend=1 00:05:01.493 --rc geninfo_all_blocks=1 00:05:01.493 --rc geninfo_unexecuted_blocks=1 00:05:01.493 00:05:01.493 ' 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.493 --rc genhtml_branch_coverage=1 00:05:01.493 --rc genhtml_function_coverage=1 00:05:01.493 --rc genhtml_legend=1 00:05:01.493 --rc geninfo_all_blocks=1 00:05:01.493 --rc geninfo_unexecuted_blocks=1 
00:05:01.493 00:05:01.493 ' 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.493 --rc genhtml_branch_coverage=1 00:05:01.493 --rc genhtml_function_coverage=1 00:05:01.493 --rc genhtml_legend=1 00:05:01.493 --rc geninfo_all_blocks=1 00:05:01.493 --rc geninfo_unexecuted_blocks=1 00:05:01.493 00:05:01.493 ' 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.493 --rc genhtml_branch_coverage=1 00:05:01.493 --rc genhtml_function_coverage=1 00:05:01.493 --rc genhtml_legend=1 00:05:01.493 --rc geninfo_all_blocks=1 00:05:01.493 --rc geninfo_unexecuted_blocks=1 00:05:01.493 00:05:01.493 ' 00:05:01.493 11:40:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:01.493 11:40:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4015525 00:05:01.493 11:40:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4015525 00:05:01.493 11:40:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 4015525 ']' 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.493 11:40:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:01.493 [2024-12-09 11:40:09.372169] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
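The version record the test inspects next is plain JSON-RPC output; with a target up on the default socket it can be reproduced directly (jq is already used by the test itself to flatten rpc_get_methods):

    scripts/rpc.py spdk_get_version | jq -r .version
    # -> SPDK v25.01-pre git sha1 427915fc6

Note that cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, which is why the later env_dpdk_get_mem_stats call is expected to come back as "Method not found".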
00:05:01.493 [2024-12-09 11:40:09.372247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015525 ] 00:05:01.754 [2024-12-09 11:40:09.462151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.754 [2024-12-09 11:40:09.496931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.325 11:40:10 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.325 11:40:10 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:02.325 11:40:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:02.586 { 00:05:02.586 "version": "SPDK v25.01-pre git sha1 427915fc6", 00:05:02.586 "fields": { 00:05:02.586 "major": 25, 00:05:02.586 "minor": 1, 00:05:02.586 "patch": 0, 00:05:02.586 "suffix": "-pre", 00:05:02.586 "commit": "427915fc6" 00:05:02.586 } 00:05:02.586 } 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:02.586 11:40:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:02.586 11:40:10 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.847 request: 00:05:02.847 { 00:05:02.847 "method": "env_dpdk_get_mem_stats", 00:05:02.847 "req_id": 1 00:05:02.847 } 00:05:02.847 Got JSON-RPC error response 00:05:02.847 response: 00:05:02.847 { 00:05:02.847 "code": -32601, 00:05:02.847 "message": "Method not found" 00:05:02.847 } 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.847 11:40:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4015525 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 4015525 ']' 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 4015525 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4015525 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4015525' 00:05:02.847 killing process with pid 4015525 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@973 -- # kill 4015525 00:05:02.847 11:40:10 app_cmdline -- common/autotest_common.sh@978 -- # wait 4015525 00:05:03.108 00:05:03.108 real 0m1.688s 00:05:03.108 user 0m2.001s 00:05:03.108 sys 0m0.470s 00:05:03.108 11:40:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.108 11:40:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:03.108 ************************************ 00:05:03.108 END TEST app_cmdline 00:05:03.108 ************************************ 00:05:03.108 11:40:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.108 11:40:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.108 11:40:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.108 11:40:10 -- common/autotest_common.sh@10 -- # set +x 00:05:03.108 ************************************ 00:05:03.108 START TEST version 00:05:03.108 ************************************ 00:05:03.108 11:40:10 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.108 * Looking for test storage... 
00:05:03.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:03.108 11:40:10 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.108 11:40:10 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.108 11:40:10 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.370 11:40:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.370 11:40:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.370 11:40:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.370 11:40:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.370 11:40:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.370 11:40:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.370 11:40:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.370 11:40:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.370 11:40:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.370 11:40:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.370 11:40:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.370 11:40:11 version -- scripts/common.sh@344 -- # case "$op" in 00:05:03.370 11:40:11 version -- scripts/common.sh@345 -- # : 1 00:05:03.370 11:40:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.370 11:40:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.370 11:40:11 version -- scripts/common.sh@365 -- # decimal 1 00:05:03.370 11:40:11 version -- scripts/common.sh@353 -- # local d=1 00:05:03.370 11:40:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.370 11:40:11 version -- scripts/common.sh@355 -- # echo 1 00:05:03.370 11:40:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.370 11:40:11 version -- scripts/common.sh@366 -- # decimal 2 00:05:03.370 11:40:11 version -- scripts/common.sh@353 -- # local d=2 00:05:03.370 11:40:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.370 11:40:11 version -- scripts/common.sh@355 -- # echo 2 00:05:03.370 11:40:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.370 11:40:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.370 11:40:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.370 11:40:11 version -- scripts/common.sh@368 -- # return 0 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.370 --rc genhtml_branch_coverage=1 00:05:03.370 --rc genhtml_function_coverage=1 00:05:03.370 --rc genhtml_legend=1 00:05:03.370 --rc geninfo_all_blocks=1 00:05:03.370 --rc geninfo_unexecuted_blocks=1 00:05:03.370 00:05:03.370 ' 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.370 --rc genhtml_branch_coverage=1 00:05:03.370 --rc genhtml_function_coverage=1 00:05:03.370 --rc genhtml_legend=1 00:05:03.370 --rc geninfo_all_blocks=1 00:05:03.370 --rc geninfo_unexecuted_blocks=1 00:05:03.370 00:05:03.370 ' 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.370 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.370 --rc genhtml_branch_coverage=1 00:05:03.370 --rc genhtml_function_coverage=1 00:05:03.370 --rc genhtml_legend=1 00:05:03.370 --rc geninfo_all_blocks=1 00:05:03.370 --rc geninfo_unexecuted_blocks=1 00:05:03.370 00:05:03.370 ' 00:05:03.370 11:40:11 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.370 --rc genhtml_branch_coverage=1 00:05:03.370 --rc genhtml_function_coverage=1 00:05:03.370 --rc genhtml_legend=1 00:05:03.370 --rc geninfo_all_blocks=1 00:05:03.370 --rc geninfo_unexecuted_blocks=1 00:05:03.370 00:05:03.370 ' 00:05:03.370 11:40:11 version -- app/version.sh@17 -- # get_header_version major 00:05:03.370 11:40:11 version -- app/version.sh@14 -- # cut -f2 00:05:03.370 11:40:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.370 11:40:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.371 11:40:11 version -- app/version.sh@17 -- # major=25 00:05:03.371 11:40:11 version -- app/version.sh@18 -- # get_header_version minor 00:05:03.371 11:40:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # cut -f2 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.371 11:40:11 version -- app/version.sh@18 -- # minor=1 00:05:03.371 11:40:11 version -- app/version.sh@19 -- # get_header_version patch 00:05:03.371 11:40:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # cut -f2 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.371 11:40:11 version -- app/version.sh@19 -- # patch=0 00:05:03.371 11:40:11 version -- app/version.sh@20 -- # get_header_version suffix 00:05:03.371 11:40:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # cut -f2 00:05:03.371 11:40:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.371 11:40:11 version -- app/version.sh@20 -- # suffix=-pre 00:05:03.371 11:40:11 version -- app/version.sh@22 -- # version=25.1 00:05:03.371 11:40:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:03.371 11:40:11 version -- app/version.sh@28 -- # version=25.1rc0 00:05:03.371 11:40:11 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:03.371 11:40:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:03.371 11:40:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:03.371 11:40:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:03.371 00:05:03.371 real 0m0.278s 00:05:03.371 user 0m0.168s 00:05:03.371 sys 0m0.157s 00:05:03.371 11:40:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.371 
11:40:11 version -- common/autotest_common.sh@10 -- # set +x 00:05:03.371 ************************************ 00:05:03.371 END TEST version 00:05:03.371 ************************************ 00:05:03.371 11:40:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:03.371 11:40:11 -- spdk/autotest.sh@194 -- # uname -s 00:05:03.371 11:40:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:03.371 11:40:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.371 11:40:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.371 11:40:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:03.371 11:40:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.371 11:40:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.371 11:40:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:03.371 11:40:11 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:03.633 11:40:11 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:03.633 11:40:11 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:03.633 11:40:11 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.633 11:40:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.633 11:40:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.633 11:40:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.633 ************************************ 00:05:03.633 START TEST nvmf_tcp 00:05:03.633 ************************************ 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.633 * Looking for test storage... 
00:05:03.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.633 11:40:11 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:03.633 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:03.633 11:40:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.633 11:40:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.895 ************************************ 00:05:03.896 START TEST nvmf_target_core 00:05:03.896 ************************************ 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:03.896 * Looking for test storage... 00:05:03.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.896 --rc genhtml_branch_coverage=1 00:05:03.896 --rc genhtml_function_coverage=1 00:05:03.896 --rc genhtml_legend=1 00:05:03.896 --rc geninfo_all_blocks=1 00:05:03.896 --rc geninfo_unexecuted_blocks=1 00:05:03.896 00:05:03.896 ' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.896 --rc genhtml_branch_coverage=1 00:05:03.896 --rc genhtml_function_coverage=1 00:05:03.896 --rc genhtml_legend=1 00:05:03.896 --rc geninfo_all_blocks=1 00:05:03.896 --rc geninfo_unexecuted_blocks=1 00:05:03.896 00:05:03.896 ' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.896 --rc genhtml_branch_coverage=1 00:05:03.896 --rc genhtml_function_coverage=1 00:05:03.896 --rc genhtml_legend=1 00:05:03.896 --rc geninfo_all_blocks=1 00:05:03.896 --rc geninfo_unexecuted_blocks=1 00:05:03.896 00:05:03.896 ' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.896 --rc genhtml_branch_coverage=1 00:05:03.896 --rc genhtml_function_coverage=1 00:05:03.896 --rc genhtml_legend=1 00:05:03.896 --rc geninfo_all_blocks=1 00:05:03.896 --rc geninfo_unexecuted_blocks=1 00:05:03.896 00:05:03.896 ' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # : 0 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:05:03.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@56 -- # have_pci_nics=0 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:03.896 11:40:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:04.159 
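The version probe traced repeatedly above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2" via cmp_versions) before exporting the branch/function-coverage rc flags. A minimal standalone sketch of that dotted-version comparison, assuming only the behavior visible in the trace (split on ".-", compare field by field, treat missing fields as 0):

#!/usr/bin/env bash
# Sketch of the cmp_versions/lt pattern from the trace above.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1    # equal versions are not "less than"
}

# Enable the extra lcov rc options only on pre-2.x lcov, as the trace does.
lcov_ver=$(lcov --version | awk '{print $NF}')
if lt "$lcov_ver" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

With lcov 1.15 installed, "lt 1.15 2" compares 1 against 2 in the first field and returns true, which is why every test in this run exports the same LCOV_OPTS block.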
************************************ 00:05:04.159 START TEST nvmf_abort 00:05:04.159 ************************************ 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.159 * Looking for test storage... 00:05:04.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.159 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.159 --rc genhtml_branch_coverage=1 00:05:04.159 --rc genhtml_function_coverage=1 00:05:04.159 --rc genhtml_legend=1 00:05:04.159 --rc geninfo_all_blocks=1 00:05:04.159 --rc geninfo_unexecuted_blocks=1 00:05:04.159 00:05:04.159 ' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.159 --rc genhtml_branch_coverage=1 00:05:04.159 --rc genhtml_function_coverage=1 00:05:04.159 --rc genhtml_legend=1 00:05:04.159 --rc geninfo_all_blocks=1 00:05:04.159 --rc geninfo_unexecuted_blocks=1 00:05:04.159 00:05:04.159 ' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.159 --rc genhtml_branch_coverage=1 00:05:04.159 --rc genhtml_function_coverage=1 00:05:04.159 --rc genhtml_legend=1 00:05:04.159 --rc geninfo_all_blocks=1 00:05:04.159 --rc geninfo_unexecuted_blocks=1 00:05:04.159 00:05:04.159 ' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.159 --rc genhtml_branch_coverage=1 00:05:04.159 --rc genhtml_function_coverage=1 00:05:04.159 --rc genhtml_legend=1 00:05:04.159 --rc geninfo_all_blocks=1 00:05:04.159 --rc geninfo_unexecuted_blocks=1 00:05:04.159 00:05:04.159 ' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.159 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # : 0 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:05:04.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@56 -- # have_pci_nics=0 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
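Both passes through build_nvmf_app_args above log "[: : integer expression expected" from nvmf/common.sh line 34: an empty flag variable reaches test's numeric -eq, which requires an integer operand. The run treats this as harmless, the branch simply is not taken, but the usual hardening is a default expansion. A hedged sketch of the pattern (FLAG is a stand-in name, not the actual variable in common.sh):

# Failing shape seen in the trace: FLAG is empty, so `[` sees '' -eq 1.
FLAG=""
[ "$FLAG" -eq 1 ] 2>/dev/null || echo "branch not taken, error printed to stderr"

# Hardened shape: default the flag to 0 before the numeric test.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi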
00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # xtrace_disable 00:05:04.422 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_devs=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_devs 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_net_devs=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # pci_drivers=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # local -A pci_drivers 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # net_devs=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga net_devs 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # e810=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga e810 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # x722=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga x722 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # mlx=() 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # local -ga mlx 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.575 11:40:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:12.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:12.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:05:12.575 11:40:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:12.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.575 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:12.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.576 11:40:19 
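gather_supported_nvmf_pci_devs, traced above, matches known Intel/Mellanox device IDs (here two Intel 0x159b E810 ports at 0000:4b:00.0/.1) and then resolves each PCI address to its kernel netdev through sysfs. A reduced sketch of that sysfs walk, assuming the same /sys layout the script relies on:

#!/usr/bin/env bash
# For each candidate PCI address, list the net interfaces the kernel
# created under /sys/bus/pci/devices/<bdf>/net/, as the trace does
# (yielding cvl_0_0 and cvl_0_1 for the two E810 ports).
net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # no bound net driver
    pci_net_devs=("${pci_net_devs[@]##*/}")      # keep interface name only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done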
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:05:12.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:05:12.576 00:05:12.576 --- 10.0.0.2 ping statistics --- 00:05:12.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.576 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
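nvmf_tcp_init, traced above, builds a two-endpoint TCP topology out of one physical NIC pair: the target port moves into its own network namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, one iptables rule admits port 4420, and a ping in each direction proves the path. Condensed from the commands visible in the trace (interface and namespace names as logged), to be run as root:

#!/usr/bin/env bash
set -e
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target -> initiator

Namespacing the target means packets between the two ports leave and re-enter the machine over the physical link, so the test exercises the real NIC data path rather than the kernel loopback.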
00:05:12.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:05:12.576 00:05:12.576 --- 10.0.0.1 ping statistics --- 00:05:12.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.576 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=4020004 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 4020004 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4020004 ']' 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.576 11:40:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.576 [2024-12-09 11:40:19.613023] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:05:12.576 [2024-12-09 11:40:19.613094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:12.576 [2024-12-09 11:40:19.711837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.576 [2024-12-09 11:40:19.765829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:12.576 [2024-12-09 11:40:19.765881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:12.576 [2024-12-09 11:40:19.765891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.576 [2024-12-09 11:40:19.765898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.576 [2024-12-09 11:40:19.765904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:12.576 [2024-12-09 11:40:19.767681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.576 [2024-12-09 11:40:19.767901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.576 [2024-12-09 11:40:19.767998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.576 [2024-12-09 11:40:20.452769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.576 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 Malloc0 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 Delay0 
00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 [2024-12-09 11:40:20.529753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.838 11:40:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:12.838 [2024-12-09 11:40:20.668021] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:15.387 Initializing NVMe Controllers 00:05:15.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:15.387 controller IO queue size 128 less than required 00:05:15.387 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:15.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:15.387 Initialization complete. Launching workers. 
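The abort target assembled above is a fixed RPC sequence against the namespaced nvmf_tgt: a TCP transport, a 64 MiB malloc bdev with 4 KiB blocks, a delay bdev layered on top (1,000,000 on every latency knob, which keeps commands in flight long enough to be abortable), then a subsystem exposing that namespace on 10.0.0.2:4420. Replayed as direct rpc.py calls with the same arguments the trace shows (rpc.py is SPDK's stock RPC client; paths assume an SPDK checkout and an already-running target):

#!/usr/bin/env bash
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000    # read/write latencies
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example is then pointed at that listener with a queue depth of 128; since the subsystem's I/O queue size is smaller than 128, requests queue up at the driver and the tool gets a steady stream of in-flight commands to abort, which is exactly the condition the result counters above report on.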
00:05:15.387 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29018 00:05:15.387 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29079, failed to submit 62 00:05:15.387 success 29022, unsuccessful 57, failed 0 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # sync 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # set +e 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # for i in {1..20} 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:05:15.387 rmmod nvme_tcp 00:05:15.387 rmmod nvme_fabrics 00:05:15.387 rmmod nvme_keyring 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # set -e 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@130 -- # return 0 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 4020004 ']' 00:05:15.387 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 4020004 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4020004 ']' 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4020004 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4020004 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4020004' 00:05:15.388 killing process with pid 4020004 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4020004 00:05:15.388 11:40:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4020004 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:15.388 11:40:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # iptr 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # remove_spdk_ns 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.388 11:40:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.308 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:05:17.308 00:05:17.308 real 0m13.352s 00:05:17.308 user 0m14.079s 00:05:17.308 sys 0m6.544s 00:05:17.308 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.308 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.308 ************************************ 00:05:17.308 END TEST nvmf_abort 00:05:17.308 ************************************ 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.573 ************************************ 00:05:17.573 START TEST nvmf_ns_hotplug_stress 00:05:17.573 ************************************ 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.573 * Looking for test storage... 
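The nvmf_abort teardown traced just above is the mirror image of setup, and its firewall step shows why the setup tagged every rule: each insert carried an "SPDK_NVMF" comment, so cleanup is one save/filter/restore pass instead of per-rule deletes. Condensed from the traced nvmftestfini path (namespace and interface names as logged):

#!/usr/bin/env bash
# Drop every firewall rule the test tagged, in one pass.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unload the initiator-side kernel modules pulled in for the run.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Remove the target namespace and any leftover addressing.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1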
00:05:17.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.573 --rc genhtml_branch_coverage=1 00:05:17.573 --rc genhtml_function_coverage=1 00:05:17.573 --rc genhtml_legend=1 00:05:17.573 --rc geninfo_all_blocks=1 00:05:17.573 --rc geninfo_unexecuted_blocks=1 00:05:17.573 00:05:17.573 ' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.573 --rc genhtml_branch_coverage=1 00:05:17.573 --rc genhtml_function_coverage=1 00:05:17.573 --rc genhtml_legend=1 00:05:17.573 --rc geninfo_all_blocks=1 00:05:17.573 --rc geninfo_unexecuted_blocks=1 00:05:17.573 00:05:17.573 ' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.573 --rc genhtml_branch_coverage=1 00:05:17.573 --rc genhtml_function_coverage=1 00:05:17.573 --rc genhtml_legend=1 00:05:17.573 --rc geninfo_all_blocks=1 00:05:17.573 --rc geninfo_unexecuted_blocks=1 00:05:17.573 00:05:17.573 ' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.573 --rc genhtml_branch_coverage=1 00:05:17.573 --rc genhtml_function_coverage=1 00:05:17.573 --rc genhtml_legend=1 00:05:17.573 --rc geninfo_all_blocks=1 00:05:17.573 --rc geninfo_unexecuted_blocks=1 00:05:17.573 00:05:17.573 ' 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.573 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # : 0 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:05:17.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@56 -- # have_pci_nics=0 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # xtrace_disable 00:05:17.834 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_devs=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_devs 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_net_devs=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # pci_drivers=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # local -A pci_drivers 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # net_devs=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga net_devs 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # e810=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # 
local -ga e810 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # x722=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga x722 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # mlx=() 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # local -ga mlx 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:26.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.033 
11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:26.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.033 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:26.034 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:26.034 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:05:26.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:26.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:05:26.034 00:05:26.034 --- 10.0.0.2 ping statistics --- 00:05:26.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.034 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:05:26.034 00:05:26.034 --- 10.0.0.1 ping statistics --- 00:05:26.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.034 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=4025038 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 4025038 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
4025038 ']' 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.034 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.034 [2024-12-09 11:40:32.947313] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:05:26.034 [2024-12-09 11:40:32.947380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.034 [2024-12-09 11:40:33.046354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.034 [2024-12-09 11:40:33.097014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:26.034 [2024-12-09 11:40:33.097067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:26.034 [2024-12-09 11:40:33.097075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.034 [2024-12-09 11:40:33.097083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.034 [2024-12-09 11:40:33.097089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
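The cmp_versions/lt machinery traced at scripts/common.sh@333-@368 near the top of this excerpt is deciding whether the installed lcov predates 2.x, which determines whether the old-style --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings get exported (as they do in this run). A compact paraphrase of that field-by-field comparison, assuming purely numeric version fields (a sketch, not the script verbatim):

  # lt A B: succeed when version A sorts strictly before version B.
  # Fields are split on '.' and '-'; missing fields default to 0.
  lt() {
      local IFS=.-
      local -a v1 v2
      local i
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov is pre-2.x: keep the legacy --rc lcov_* options"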
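One genuine script defect also surfaces earlier in this excerpt: nvmf/common.sh line 34 evaluates '[' '' -eq 1 ']' inside build_nvmf_app_args, and bash rejects it with "[: : integer expression expected", because some flag variable (the trace never prints its name) expanded to the empty string inside an integer test. The run survives, since the failed test just takes the else path, but a defaulted expansion would keep it well formed; FLAG below is a placeholder for whichever variable was empty:

  FLAG=""                               # stands in for the unset flag
  if [[ "${FLAG:-0}" -eq 1 ]]; then     # :-0 substitutes 0 for empty/unset values
      echo "flag enabled"
  fi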
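The nvmf_tcp_init entries above carve the two detected E810 ports (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1, exposed as cvl_0_0 and cvl_0_1) into a two-node topology on a single host: the target NIC moves into a private network namespace while the initiator NIC stays in the root namespace. Collected from the trace (the iptables bookkeeping comment trimmed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Both pings answered in well under a millisecond above, so the wire is known good before the target ever starts.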
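At this point nvmf_tgt (pid 4025038) is up inside cvl_0_0_ns_spdk with three reactors running and is listening on /var/tmp/spdk.sock; the entries that follow configure it over rpc.py. The same RPC sequence gathered into one place for readability (every command appears verbatim in the trace below; only the $rpc shorthand is added):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options verbatim from the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0           # 32 MiB RAM bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes nsid 1
  $rpc bdev_null_create NULL1 1000 512                # null bdev, resized on every loop pass below
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 wraps Malloc0 with large artificial latencies, so plenty of I/O is guaranteed to be in flight whenever its namespace gets yanked.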
00:05:26.034 [2024-12-09 11:40:33.098859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.034 [2024-12-09 11:40:33.099160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.034 [2024-12-09 11:40:33.099162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:26.034 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:26.310 [2024-12-09 11:40:33.951657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.310 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:26.310 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:26.575 [2024-12-09 11:40:34.305100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.575 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.836 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:26.836 Malloc0 00:05:26.836 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:27.097 Delay0 00:05:27.098 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.359 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:27.359 NULL1 00:05:27.359 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:27.618 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4025425 00:05:27.618 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:27.618 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:27.618 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.878 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.138 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:28.138 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:28.138 true 00:05:28.138 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:28.138 11:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.398 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.658 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:28.658 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:28.658 true 00:05:28.658 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:28.659 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.920 Read completed with error (sct=0, sc=11) 00:05:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.920 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
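From here the log settles into the stress pattern proper: while spdk_nvme_perf (PERF_PID=4025425) drives 30 seconds of 512-byte random reads at queue depth 128, the script keeps hot-detaching namespace 1, re-attaching Delay0, and growing NULL1. A condensed paraphrase of the loop behind the @44-@50 entries (structure inferred from the trace; the real script is spdk/test/nvmf/target/ns_hotplug_stress.sh):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  null_size=1000
  while kill -0 "$PERF_PID"; do                                     # run until perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-detach nsid 1 under I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the delay bdev
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 one step
  done

Each pass bumps null_size by one, which is why the resize argument counts 1001, 1002, 1003 and so on through the rest of the excerpt.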
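The interleaved "Read completed with error (sct=0, sc=11)" lines are perf reporting reads that landed in a detach window, and the "Message suppressed 999 times" prefix shows that reporting is rate-limited, which lines up with the -Q 1000 argument on the perf command line (an inference from this log, not a documented mapping). sct=0 is the NVMe generic command status set; read as decimal, sc 11 is 0x0b, Invalid Namespace or Format, which is exactly what in-flight reads should see against a namespace that was just removed. Against a saved copy of this console output (file name hypothetical), the failure volume can be bounded with:

  # each suppressed-message line stands in for roughly 1000 failed reads
  grep -c 'Message suppressed 999 times' console.log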
00:05:29.180 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:29.180 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:29.180 true 00:05:29.180 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:29.180 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.124 11:40:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.385 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:30.385 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:30.385 true 00:05:30.385 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:30.385 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.645 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.905 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:30.905 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:30.905 true 00:05:31.165 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:31.165 11:40:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.107 11:40:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.368 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:32.368 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:32.368 true 00:05:32.629 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:32.629 11:40:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.571 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.571 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:33.571 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:33.571 true 00:05:33.832 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:33.832 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.832 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.092 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:34.092 11:40:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:34.353 true 00:05:34.353 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:34.353 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.353 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.614 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:34.614 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:34.875 true 00:05:34.875 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:34.875 11:40:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.818 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.818 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:35.818 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:36.079 true 00:05:36.079 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:36.079 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.079 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.341 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:36.341 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:36.601 true 00:05:36.601 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:36.601 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.983 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:37.983 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:37.983 true 00:05:37.983 11:40:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:37.983 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.923 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.184 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:39.184 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:39.184 true 00:05:39.184 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:39.184 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.445 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.708 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:39.708 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:39.708 true 00:05:39.969 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:39.969 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.912 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.174 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:41.174 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:41.436 true 00:05:41.436 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 4025425 00:05:41.436 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.378 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.378 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:42.378 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:42.640 true 00:05:42.640 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:42.640 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.901 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.901 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:42.901 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:43.161 true 00:05:43.161 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:43.161 11:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.545 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:44.545 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:44.545 true 00:05:44.805 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:44.805 11:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.747 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.747 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:45.747 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:45.747 true 00:05:46.007 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:46.007 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.008 11:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.269 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:46.269 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:46.530 true 00:05:46.530 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:46.530 11:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.914 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:47.914 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:47.914 true 00:05:47.914 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:47.914 11:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.859 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:48.859 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.119 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:49.119 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:49.119 true 00:05:49.119 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:49.119 11:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.379 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.640 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:49.640 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:49.640 true 00:05:49.640 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:49.640 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.901 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.205 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:50.205 11:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:50.205 true 00:05:50.205 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:50.205 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.148 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:51.148 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.408 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:51.408 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:51.408 true 00:05:51.408 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:51.408 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.668 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.930 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:51.930 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:51.930 true 00:05:51.930 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:51.930 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.190 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.451 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:52.451 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:52.451 true 00:05:52.712 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:52.712 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.712 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.972 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:52.973 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:53.233 true 00:05:53.233 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:53.233 
11:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.617 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:54.617 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:54.617 true 00:05:54.617 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:54.617 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.558 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.818 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:55.818 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:55.818 true 00:05:55.818 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:55.818 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.079 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.339 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:56.339 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:56.339 true 00:05:56.339 11:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:56.339 
11:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 11:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.724 11:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:57.724 11:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:57.984 true 00:05:57.984 11:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425 00:05:57.984 11:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.925 Initializing NVMe Controllers 00:05:58.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:58.925 Controller IO queue size 128, less than required. 00:05:58.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:58.925 Controller IO queue size 128, less than required. 00:05:58.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:58.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:58.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:58.925 Initialization complete. Launching workers. 
00:05:58.925 ========================================================
00:05:58.925                                                                            Latency(us)
00:05:58.925 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:05:58.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2612.53       1.28   31861.53    1694.83 1034239.85
00:05:58.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18741.36       9.15    6806.84    1161.79  408093.49
00:05:58.925 ========================================================
00:05:58.925 Total                                                                     :   21353.89      10.43    9872.14    1161.79 1034239.85
00:05:58.925
00:05:58.925 11:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.925 11:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:05:58.925 11:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:05:59.185 true
00:05:59.185 11:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4025425
00:05:59.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4025425) - No such process
00:05:59.185 11:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4025425
00:05:59.185 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.185 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:59.445 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:59.445 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:59.445 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:59.445 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:59.445 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:59.706 null0
00:05:59.706 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:59.706 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:59.706 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:59.967 null1
00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:59.967 null2 00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.967 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:00.227 null3 00:06:00.227 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.227 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.228 11:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:00.228 null4 00:06:00.228 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.228 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.228 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:00.488 null5 00:06:00.488 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.488 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.488 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:00.749 null6 00:06:00.749 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.749 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.749 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:00.749 null7 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
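Orientation note: the add/remove/resize churn that fills the log up to the "wait 4025425" record above all comes from one loop in ns_hotplug_stress.sh, visible through its "-- #" xtrace records at lines 44-50. A minimal sketch of that loop, reconstructed from the trace alone and not the shipped script (the rpc and perf_pid shorthands and the starting null_size are assumed names/values):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf_pid=4025425          # assumption: pid of the background I/O generator probed by kill -0
null_size=1018            # assumption: starting value; the trace shows 1019..1033
while kill -0 "$perf_pid"; do                                       # line 44: loop while the I/O load is alive
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45: hot-remove NSID 1 under load
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                    # line 49: next target size
    $rpc bdev_null_resize NULL1 "$null_size"                        # line 50: resize NULL1 while I/O runs
done

Once the generator exits, kill -0 fails with "No such process", the loop ends, and lines 53-55 reap the pid and remove namespaces 1 and 2, which is exactly the transition recorded above.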
00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.010 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
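The interleaved sh@14-sh@18 records in this stretch are eight concurrent copies of the test's add_remove helper, one per namespace/bdev pair. A sketch reconstructed from those records, not the shipped function (it reuses the $rpc shorthand from the sketch above; the bound of 10 comes from the traced "(( i < 10 ))"):

add_remove() {
    local nsid=$1 bdev=$2                                                          # line 14: which namespace/bdev this worker cycles
    for (( i = 0; i < 10; i++ )); do                                               # line 16: ten add/remove cycles
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17: attach the bdev at a fixed NSID
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18: detach it again
    done
}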
00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
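The launcher producing the sh@58-sh@66 records is a two-stage fan-out: create eight 100 MiB null bdevs with a 4096-byte block size, start one add_remove worker per namespace in the background, then reap them all. A sketch under the same assumptions as the sketches above:

nthreads=8                                                  # line 58
pids=()
for (( i = 0; i < nthreads; i++ )); do                      # lines 59-60: create null0..null7 (100 MiB, 4 KiB blocks)
    $rpc bdev_null_create "null$i" 100 4096
done
for (( i = 0; i < nthreads; i++ )); do                      # lines 62-64: one background worker per namespace
    add_remove $(( i + 1 )) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                           # line 66: reap all eight workers

The "wait 4032224 ... 4032237" record just below is that final reap; the add/remove records from the eight workers interleave freely, which is the point of the stress.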
00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4032224 4032225 4032227 4032229 4032231 4032233 4032235 4032237 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.011 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.273 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.273 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.273 11:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.273 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.274 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.535 11:41:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.535 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.851 11:41:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.851 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.192 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.192 11:41:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.193 11:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.193 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.530 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.530 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.830 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.831 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.831 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.101 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.101 11:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.364 11:41:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.364 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.365 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.365 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.365 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.365 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.365 11:41:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.625 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.626 11:41:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.626 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.887 11:41:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.887 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.149 11:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.149 11:41:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.149 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 
11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.411 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.673 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # sync 00:06:04.934 
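The interleaved @16-@18 records above are the core of the stress: workers repeatedly attach and detach namespaces 1-8 (backed by null0-null7) against nqn.2016-06.io.spdk:cnode1 while the subsystem stays live. A minimal bash sketch of that loop shape, reconstructed from the trace rather than quoted from ns_hotplug_stress.sh (the iteration bound and RPC arguments are taken from the records; the per-namespace backgrounding is an assumption based on the interleaving):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {                          # one worker per namespace ID
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do    # matches the (( ++i )) / (( i < 10 )) records
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &  # concurrent workers produce the interleaving
done
wait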
11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # set +e 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # for i in {1..20} 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:06:04.934 rmmod nvme_tcp 00:06:04.934 rmmod nvme_fabrics 00:06:04.934 rmmod nvme_keyring 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # set -e 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@130 -- # return 0 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 4025038 ']' 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 4025038 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4025038 ']' 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4025038 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.934 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4025038 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4025038' 00:06:05.196 killing process with pid 4025038 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4025038 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4025038 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # iptr 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # remove_spdk_ns 00:06:05.196 11:41:12 
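The nvmfcleanup and killprocess records above amount to the standard teardown: unload the kernel initiator modules (with retries, since nvme-tcp can stay busy while connections drain), then stop the nvmf_tgt reactor by PID. A condensed sketch of that sequence (the retry count, module names, and the sudo guard come from the records; the back-off sleep is an assumption):

set +e                                   # module removal may fail while in use
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1                              # assumed back-off; not visible in the trace
done
modprobe -v -r nvme-fabrics
set -e

pid=4025038                              # nvmf_tgt PID from the killprocess records
if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"           # reap it so the next test starts clean
fi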
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:05.196 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:06:07.743
00:06:07.743 real 0m49.772s
00:06:07.743 user 3m16.063s
00:06:07.743 sys 0m16.376s
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:07.743 ************************************
00:06:07.743 END TEST nvmf_ns_hotplug_stress
00:06:07.743 ************************************
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:07.743 ************************************
00:06:07.743 START TEST nvmf_delete_subsystem
00:06:07.743 ************************************
00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:07.743 * Looking for test storage...
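Each sub-test is driven through the run_test wrapper, which prints the banner pairs above and times the test body (the real/user/sys block). A sketch of the behaviour implied by the log; this is an assumption reconstructed from the banners, not the verbatim autotest_common.sh helper:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                   # produces the real/user/sys block seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}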
00:06:07.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.743 --rc genhtml_branch_coverage=1 00:06:07.743 --rc genhtml_function_coverage=1 00:06:07.743 --rc genhtml_legend=1 00:06:07.743 --rc geninfo_all_blocks=1 00:06:07.743 --rc geninfo_unexecuted_blocks=1 00:06:07.743 00:06:07.743 ' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.743 --rc genhtml_branch_coverage=1 00:06:07.743 --rc genhtml_function_coverage=1 00:06:07.743 --rc genhtml_legend=1 00:06:07.743 --rc geninfo_all_blocks=1 00:06:07.743 --rc geninfo_unexecuted_blocks=1 00:06:07.743 00:06:07.743 ' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.743 --rc genhtml_branch_coverage=1 00:06:07.743 --rc genhtml_function_coverage=1 00:06:07.743 --rc genhtml_legend=1 00:06:07.743 --rc geninfo_all_blocks=1 00:06:07.743 --rc geninfo_unexecuted_blocks=1 00:06:07.743 00:06:07.743 ' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.743 --rc genhtml_branch_coverage=1 00:06:07.743 --rc genhtml_function_coverage=1 00:06:07.743 --rc genhtml_legend=1 00:06:07.743 --rc geninfo_all_blocks=1 00:06:07.743 --rc geninfo_unexecuted_blocks=1 00:06:07.743 00:06:07.743 ' 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
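The scripts/common.sh records above implement a field-wise version comparison: split both version strings on '.', '-' or ':', then compare numerically field by field. Here 1.15 '<' 2 holds, which selects the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage spelling exported in the LCOV_OPTS lines. A compact sketch of the same algorithm (a simplified assumption: missing fields compare as 0, and the real helper also validates each field through its decimal guard):

cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:                       # field separators per the @336/@337 records
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do   # first differing field decides
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                   # all fields equal
}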
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.743 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # : 0 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:06:07.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@56 -- # have_pci_nics=0 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # xtrace_disable 00:06:07.744 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_devs=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_devs 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_net_devs=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # pci_drivers=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # local -A pci_drivers 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # net_devs=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga net_devs 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # e810=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga e810 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # x722=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # 
local -ga x722 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # mlx=() 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # local -ga mlx 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:15.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:15.890 
11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:15.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:15.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:15.890 Found net devices under 0000:4b:00.1: cvl_0_1 
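The discovery records above first build whitelists of supported NIC PCI IDs (e810, x722, mlx), then resolve each matching PCI function to its kernel interface through sysfs. A minimal sketch of that resolution step (the two PCI addresses are the ones echoed above; the sysfs layout follows the @406-@424 records):

pci_devs=(0000:4b:00.0 0000:4b:00.1)         # e810 ports found above (0x8086:0x159b)
net_devs=()
for pci in "${pci_devs[@]}"; do
    # each PCI function exposes its net interface(s) under sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# -> cvl_0_0 and cvl_0_1, used below as target and initiator interfaces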
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:06:15.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:15.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.741 ms
00:06:15.890
00:06:15.890 --- 10.0.0.2 ping statistics ---
00:06:15.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:15.890 rtt min/avg/max/mdev = 0.741/0.741/0.741/0.000 ms
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:15.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:15.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms
00:06:15.890
00:06:15.890 --- 10.0.0.1 ping statistics ---
00:06:15.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:15.890 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=4037428
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 4037428
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4037428 ']'
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.890 11:41:22
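Because both ports sit in one host, nvmf_tcp_init splits them across network namespaces: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk and will host nvmf_tgt, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator; an iptables ACCEPT rule opens port 4420 and the two pings above prove reachability in both directions before the target starts. Condensed from the @272-@292 records:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns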
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.890 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.890 [2024-12-09 11:41:22.740933] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:06:15.890 [2024-12-09 11:41:22.740997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.890 [2024-12-09 11:41:22.841019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.891 [2024-12-09 11:41:22.891975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.891 [2024-12-09 11:41:22.892035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.891 [2024-12-09 11:41:22.892043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.891 [2024-12-09 11:41:22.892051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.891 [2024-12-09 11:41:22.892057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.891 [2024-12-09 11:41:22.893859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.891 [2024-12-09 11:41:22.893990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 [2024-12-09 11:41:23.602901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.891 11:41:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 [2024-12-09 11:41:23.619213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 NULL1 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 Delay0 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4037762 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:15.891 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.891 [2024-12-09 11:41:23.714109] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
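At this point the trace has built the whole delete_subsystem fixture: one port of the dual-port E810 (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace and hosts the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A hedged, condensed replay of the commands visible in the xtrace above (rpc_cmd wraps scripts/rpc.py; paths and the default RPC socket are environment-specific, so treat this as a sketch rather than the scripts verbatim):

    # namespace plumbing, from nvmf_tcp_init in nvmf/common.sh
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # target configuration, via the RPC socket of the namespaced nvmf_tgt
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev's latency arguments are in microseconds, so Delay0 injects roughly one second on every read and write. spdk_nvme_perf then drives 512-byte random 70/30 read/write at queue depth 128 from cores 2-3 (-c 0xC) for 5 seconds, while the test deletes the subsystem underneath it.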
00:06:17.802 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:17.802 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.802 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 [2024-12-09 11:41:25.957880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208680 is same with the state(6) to be set 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read 
completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 starting I/O failed: -6 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Write completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.373 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error 
(sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 starting I/O failed: -6 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 
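A hedged decoding of the error storm: sct=0 selects the NVMe generic command status table, in which status code 8 is "Command Aborted due to SQ Deletion", which is exactly what in-flight I/O should report once nvmf_delete_subsystem tears down the subsystem's queue pairs. The interleaved "starting I/O failed: -6" lines are submission failures (plausibly -ENXIO) once the qpair is gone. A minimal sketch of that mapping, under those assumptions:

    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct:$sc" in
            0:0) echo "Successful Completion" ;;
            0:8) echo "Command Aborted due to SQ Deletion" ;;
            *)   echo "sct=$sct sc=$sc: see the NVMe base spec status tables" ;;
        esac
    }
    decode_nvme_status 0 8   # the pair reported on every completion above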
00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 Read completed with error (sct=0, sc=8) 00:06:18.374 Write completed with error (sct=0, sc=8) 00:06:18.374 starting I/O failed: -6 00:06:18.374 [2024-12-09 11:41:25.963166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff49c000c40 is same with the state(6) to be set 00:06:19.314 [2024-12-09 11:41:26.935665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12099b0 is same with the state(6) to be set 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error 
(sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 [2024-12-09 11:41:26.961084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12084a0 is same with the state(6) to be set 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Read completed with error (sct=0, sc=8) 00:06:19.314 Write completed with error (sct=0, sc=8) 00:06:19.315 [2024-12-09 11:41:26.961607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208860 is same with the state(6) to be set 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 
00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 [2024-12-09 11:41:26.964233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff49c00d7c0 is same with the state(6) to be set 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Write completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 Read completed with error (sct=0, sc=8) 00:06:19.315 [2024-12-09 11:41:26.965195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff49c00d020 is same with the state(6) to be set 00:06:19.315 Initializing NVMe Controllers 00:06:19.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:19.315 Controller IO queue size 128, less than required. 00:06:19.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:19.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:19.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:19.315 Initialization complete. Launching workers. 
00:06:19.315 ======================================================== 00:06:19.315 Latency(us) 00:06:19.315 Device Information : IOPS MiB/s Average min max 00:06:19.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.92 0.08 894186.53 234.46 1005489.75 00:06:19.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.90 0.09 921613.61 403.77 1009789.66 00:06:19.315 ======================================================== 00:06:19.315 Total : 345.82 0.17 908137.19 234.46 1009789.66 00:06:19.315 00:06:19.315 [2024-12-09 11:41:26.965968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12099b0 (9): Bad file descriptor 00:06:19.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:19.315 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.315 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:19.315 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4037762 00:06:19.315 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4037762 00:06:19.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4037762) - No such process 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4037762 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4037762 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4037762 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.885 11:41:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.885 [2024-12-09 11:41:27.495538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4038473 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:19.885 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.885 [2024-12-09 11:41:27.575543] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
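The second phase recreates cnode1, re-attaches Delay0, and launches a 3-second perf run (-t 3), then polls for its exit; the repeated "(( delay++ > 20 ))" / "kill -0" / "sleep 0.5" lines that follow are that poll. A sketch of the loop, reconstructed from the xtrace rather than quoted from delete_subsystem.sh:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes existence
        sleep 0.5
        (( delay++ > 20 )) && exit 1            # ~10 s budget, then fail the test
    done

Because Delay0 adds about one second to every I/O, the summary printed when this perf run exits shows averages pinned near 1,000,000 us, unlike the first run's mix of fast completions and aborted commands.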
00:06:20.145 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.145 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:20.145 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.714 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.714 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:20.714 11:41:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.282 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.282 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:21.282 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.849 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.849 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:21.849 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.419 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.419 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:22.419 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.680 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.680 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:22.680 11:41:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.940 Initializing NVMe Controllers 00:06:22.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:22.940 Controller IO queue size 128, less than required. 00:06:22.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:22.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:22.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:22.940 Initialization complete. Launching workers. 
00:06:22.940 ======================================================== 00:06:22.940 Latency(us) 00:06:22.940 Device Information : IOPS MiB/s Average min max 00:06:22.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001978.91 1000186.53 1006024.38 00:06:22.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002602.77 1000240.34 1007650.98 00:06:22.940 ======================================================== 00:06:22.940 Total : 256.00 0.12 1002290.84 1000186.53 1007650.98 00:06:22.940 00:06:23.199 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.199 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4038473 00:06:23.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4038473) - No such process 00:06:23.199 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4038473 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # sync 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # set +e 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # for i in {1..20} 00:06:23.200 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:06:23.200 rmmod nvme_tcp 00:06:23.200 rmmod nvme_fabrics 00:06:23.460 rmmod nvme_keyring 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # set -e 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@130 -- # return 0 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 4037428 ']' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4037428 ']' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4037428' 00:06:23.460 killing process with pid 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4037428 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # iptr 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # remove_spdk_ns 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.460 11:41:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:06:26.004 00:06:26.004 real 0m18.259s 00:06:26.004 user 0m30.893s 00:06:26.004 sys 0m6.804s 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.004 ************************************ 00:06:26.004 END TEST nvmf_delete_subsystem 00:06:26.004 ************************************ 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.004 ************************************ 00:06:26.004 START TEST nvmf_host_management 00:06:26.004 ************************************ 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.004 * Looking for test storage... 
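The delete_subsystem test above ended by calling nvmftestfini; a hedged distillation of that cleanup, with the xtrace-suppressed namespace removal filled in as an assumption:

    sync
    modprobe -v -r nvme-tcp     # the rmmod lines show nvme_fabrics/nvme_keyring going with it
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 4037428
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1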
00:06:26.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.004 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.005 --rc genhtml_branch_coverage=1 00:06:26.005 --rc genhtml_function_coverage=1 00:06:26.005 --rc genhtml_legend=1 00:06:26.005 --rc geninfo_all_blocks=1 00:06:26.005 --rc geninfo_unexecuted_blocks=1 00:06:26.005 00:06:26.005 ' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.005 --rc genhtml_branch_coverage=1 00:06:26.005 --rc genhtml_function_coverage=1 00:06:26.005 --rc genhtml_legend=1 00:06:26.005 --rc geninfo_all_blocks=1 00:06:26.005 --rc geninfo_unexecuted_blocks=1 00:06:26.005 00:06:26.005 ' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.005 --rc genhtml_branch_coverage=1 00:06:26.005 --rc genhtml_function_coverage=1 00:06:26.005 --rc genhtml_legend=1 00:06:26.005 --rc geninfo_all_blocks=1 00:06:26.005 --rc geninfo_unexecuted_blocks=1 00:06:26.005 00:06:26.005 ' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.005 --rc genhtml_branch_coverage=1 00:06:26.005 --rc genhtml_function_coverage=1 00:06:26.005 --rc genhtml_legend=1 00:06:26.005 --rc geninfo_all_blocks=1 00:06:26.005 --rc geninfo_unexecuted_blocks=1 00:06:26.005 00:06:26.005 ' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # : 0 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@34 
-- # '[' '' -eq 1 ']' 00:06:26.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@56 -- # have_pci_nics=0 00:06:26.005 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # xtrace_disable 00:06:26.006 11:41:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_devs=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_devs 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_net_devs=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # pci_drivers=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # local -A pci_drivers 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # net_devs=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga net_devs 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # e810=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local 
-ga e810 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # x722=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga x722 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # mlx=() 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # local -ga mlx 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:34.152 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:34.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:34.153 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:34.153 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.153 11:41:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:34.153 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:06:34.153 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:06:34.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:06:34.153 00:06:34.153 --- 10.0.0.2 ping statistics --- 00:06:34.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.153 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:06:34.153 00:06:34.153 --- 10.0.0.1 ping statistics --- 00:06:34.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.153 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=4043498 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 4043498 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:34.153 11:41:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4043498 ']' 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.153 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.153 [2024-12-09 11:41:41.218689] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:06:34.153 [2024-12-09 11:41:41.218754] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.153 [2024-12-09 11:41:41.323269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.153 [2024-12-09 11:41:41.379233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.154 [2024-12-09 11:41:41.379292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.154 [2024-12-09 11:41:41.379300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.154 [2024-12-09 11:41:41.379308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.154 [2024-12-09 11:41:41.379315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
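The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by waitforlisten() while it polls the nvmf_tgt just launched inside the cvl_0_0_ns_spdk namespace. A minimal sketch of that readiness loop, assuming $nvmfpid holds the target's PID (4043498 in this run) and rpc.py sits under scripts/ in the SPDK checkout; the retry count and the probe RPC are illustrative, not the exact upstream values:

rpc_addr=/var/tmp/spdk.sock
i=100
while ((i-- > 0)); do
    # Give up early if nvmf_tgt died instead of coming up.
    kill -0 "$nvmfpid" 2> /dev/null || exit 1
    # The target is ready once any RPC answers on the socket; the unix
    # socket lives in the filesystem, so no netns gymnastics are needed.
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
    sleep 0.5
done
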
00:06:34.154 [2024-12-09 11:41:41.381339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.154 [2024-12-09 11:41:41.381509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.154 [2024-12-09 11:41:41.381534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.154 [2024-12-09 11:41:41.381546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.154 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.154 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 [2024-12-09 11:41:42.082447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 Malloc0 00:06:34.415 [2024-12-09 11:41:42.152836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=4043606 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4043606 /var/tmp/bdevperf.sock 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4043606 ']' 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:34.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:34.415 { 00:06:34.415 "params": { 00:06:34.415 "name": "Nvme$subsystem", 00:06:34.415 "trtype": "$TEST_TRANSPORT", 00:06:34.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.415 "adrfam": "ipv4", 00:06:34.415 "trsvcid": "$NVMF_PORT", 00:06:34.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.415 "hdgst": ${hdgst:-false}, 00:06:34.415 "ddgst": ${ddgst:-false} 00:06:34.415 }, 00:06:34.415 "method": "bdev_nvme_attach_controller" 00:06:34.415 } 00:06:34.415 EOF 00:06:34.415 )") 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:34.415 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:34.415 "params": { 00:06:34.415 "name": "Nvme0", 00:06:34.415 "trtype": "tcp", 00:06:34.415 "traddr": "10.0.0.2", 00:06:34.415 "adrfam": "ipv4", 00:06:34.415 "trsvcid": "4420", 00:06:34.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.415 "hdgst": false, 00:06:34.415 "ddgst": false 00:06:34.415 }, 00:06:34.415 "method": "bdev_nvme_attach_controller" 00:06:34.415 }' 00:06:34.415 [2024-12-09 11:41:42.255999] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
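The JSON object printed just above by gen_nvmf_target_json is what bdevperf receives on /dev/fd/63. A rough standalone equivalent is sketched below: the flags match the trace (-r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10), while the subsystems/bdev wrapper around the printed params and the config file path are assumptions for illustration:

cat > /tmp/bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Relative path; the trace uses the same binary under the jenkins workspace.
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10
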
00:06:34.415 [2024-12-09 11:41:42.256051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043606 ] 00:06:34.675 [2024-12-09 11:41:42.342838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.675 [2024-12-09 11:41:42.379695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.935 Running I/O for 10 seconds... 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:35.195 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:35.457 11:41:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.457 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:35.458 [2024-12-09 11:41:43.124986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e3940 is same with the state(6) to be set
[... the same tcp.c:1790 message for tqpair 0x16e3940 repeats once per queued command, timestamps 11:41:43.125067 through 11:41:43.125481; about 60 identical lines elided ...]
00:06:35.458 [2024-12-09 11:41:43.125673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.458 [2024-12-09 11:41:43.125712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued READs, cid:1 through cid:58 (lba 106624 up through 113920 in steps of 128 blocks), are printed and completed the same way, each ABORTED - SQ DELETION (00/08); 116 near-duplicate lines elided ...]
00:06:35.460 [2024-12-09 11:41:43.126734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:35.460 [2024-12-09 11:41:43.126741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.126751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.460 [2024-12-09 11:41:43.126758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.126768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.460 [2024-12-09 11:41:43.126776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.126786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.460 [2024-12-09 11:41:43.126793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.126802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.460 [2024-12-09 11:41:43.126810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.126819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x906af0 is same with the state(6) to be set 00:06:35.460 [2024-12-09 11:41:43.128079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.460 task offset: 106496 on job bdev=Nvme0n1 fails 00:06:35.460 00:06:35.460 Latency(us) 00:06:35.460 [2024-12-09T10:41:43.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.460 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.460 Job: Nvme0n1 ended in about 0.56 seconds with error 00:06:35.460 Verification LBA range: start 0x0 length 0x400 00:06:35.460 Nvme0n1 : 0.56 1489.47 93.09 114.57 0.00 38906.99 5379.41 36263.25 00:06:35.460 [2024-12-09T10:41:43.346Z] =================================================================================================================== 00:06:35.460 [2024-12-09T10:41:43.346Z] Total : 1489.47 93.09 114.57 0.00 38906.99 5379.41 36263.25 00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.460 [2024-12-09 11:41:43.130092] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.460 [2024-12-09 11:41:43.130117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6edc20 (9): Bad file descriptor 00:06:35.460 [2024-12-09 11:41:43.135723] ctrlr.c: 825:nvmf_qpair_access_allowed: 
*ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:35.460 [2024-12-09 11:41:43.135823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:35.460 [2024-12-09 11:41:43.135855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.460 [2024-12-09 11:41:43.135873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:35.460 [2024-12-09 11:41:43.135890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:35.460 [2024-12-09 11:41:43.135899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:35.460 [2024-12-09 11:41:43.135906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6edc20 00:06:35.460 [2024-12-09 11:41:43.135927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6edc20 (9): Bad file descriptor 00:06:35.460 [2024-12-09 11:41:43.135940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:35.460 [2024-12-09 11:41:43.135948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:35.460 [2024-12-09 11:41:43.135957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:35.460 [2024-12-09 11:41:43.135966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
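The ABORTED - SQ DELETION burst and the "does not allow host" CONNECT rejection above are the point of this test step: host_management.sh drops the host from the subsystem's allow list while bdevperf still has I/O in flight, watches the controller reset fail with sct 1, sc 132, and then restores access (the rpc_cmd nvmf_subsystem_add_host at host_management.sh@85). A minimal sketch of that allow-list toggle, assuming a running target and the stock rpc.py client; the removal step is inferred from the rejection and happens earlier in the script than this excerpt shows:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode0
    host=nqn.2016-06.io.spdk:host0

    # Revoking the host tears down its active queue pairs (the SQ DELETION
    # aborts above) and makes new FABRIC CONNECTs fail (sct 1, sc 132).
    $rpc nvmf_subsystem_remove_host "$subsys" "$host"

    # Re-adding the host lets the initiator's next reconnect attempt succeed.
    $rpc nvmf_subsystem_add_host "$subsys" "$host"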
00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.460 11:41:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4043606 00:06:36.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4043606) - No such process 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:36.405 { 00:06:36.405 "params": { 00:06:36.405 "name": "Nvme$subsystem", 00:06:36.405 "trtype": "$TEST_TRANSPORT", 00:06:36.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:36.405 "adrfam": "ipv4", 00:06:36.405 "trsvcid": "$NVMF_PORT", 00:06:36.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:36.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:36.405 "hdgst": ${hdgst:-false}, 00:06:36.405 "ddgst": ${ddgst:-false} 00:06:36.405 }, 00:06:36.405 "method": "bdev_nvme_attach_controller" 00:06:36.405 } 00:06:36.405 EOF 00:06:36.405 )") 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:36.405 11:41:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:36.405 "params": { 00:06:36.405 "name": "Nvme0", 00:06:36.405 "trtype": "tcp", 00:06:36.405 "traddr": "10.0.0.2", 00:06:36.405 "adrfam": "ipv4", 00:06:36.405 "trsvcid": "4420", 00:06:36.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:36.405 "hdgst": false, 00:06:36.405 "ddgst": false 00:06:36.405 }, 00:06:36.405 "method": "bdev_nvme_attach_controller" 00:06:36.405 }' 00:06:36.405 [2024-12-09 11:41:44.201529] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
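The gen_nvmf_target_json trace above assembles a bdev_nvme_attach_controller config fragment in a heredoc, runs it through jq, and hands it to bdevperf over /dev/fd/62, so no config file ever touches disk. Only the inner object is printed verbatim in the log; wrapping it in the usual SPDK "subsystems" envelope (an assumption here, since the wrapper lives elsewhere in nvmf/common.sh) gives an equivalent standalone config, say /tmp/bdevperf.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

With that file in place, the same workload as the traced run (queue depth 64, 64 KiB verify, 1 second) would be:

    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1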
00:06:36.405 [2024-12-09 11:41:44.201584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044075 ] 00:06:36.405 [2024-12-09 11:41:44.286821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.666 [2024-12-09 11:41:44.321931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.926 Running I/O for 1 seconds... 00:06:37.867 1600.00 IOPS, 100.00 MiB/s 00:06:37.867 Latency(us) 00:06:37.867 [2024-12-09T10:41:45.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.867 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:37.867 Verification LBA range: start 0x0 length 0x400 00:06:37.867 Nvme0n1 : 1.01 1652.06 103.25 0.00 0.00 38056.41 5816.32 34078.72 00:06:37.867 [2024-12-09T10:41:45.753Z] =================================================================================================================== 00:06:37.867 [2024-12-09T10:41:45.753Z] Total : 1652.06 103.25 0.00 0.00 38056.41 5816.32 34078.72 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # sync 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # set +e 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # for i in {1..20} 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:06:38.128 rmmod nvme_tcp 00:06:38.128 rmmod nvme_fabrics 00:06:38.128 rmmod nvme_keyring 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # set -e 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@130 -- # return 0 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 4043498 ']' 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 4043498 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4043498 ']' 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4043498 00:06:38.128 11:41:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4043498 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4043498' 00:06:38.128 killing process with pid 4043498 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4043498 00:06:38.128 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4043498 00:06:38.128 [2024-12-09 11:41:45.999714] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:38.388 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:38.388 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:38.388 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:38.388 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # iptr 00:06:38.388 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # remove_spdk_ns 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.389 11:41:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:40.304 00:06:40.304 real 0m14.661s 00:06:40.304 user 0m23.284s 00:06:40.304 sys 0m6.738s 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.304 ************************************ 00:06:40.304 END TEST nvmf_host_management 00:06:40.304 ************************************ 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
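Everything from modprobe -v -r nvme-tcp through the final ip -4 addr flush above is the standard nvmftestfini teardown; the killprocess trace inside it (kill -0, ps --no-headers -o comm=, the reactor_1-vs-sudo check, kill, wait) follows a guard-then-reap pattern. A condensed sketch of that logic, reconstructed from the xtrace rather than copied from common/autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")
        # The real helper special-cases sudo-wrapped processes; refusing
        # outright is the conservative stand-in here.
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"     # reap the child and propagate its exit status
    }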
00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.304 11:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.567 ************************************ 00:06:40.567 START TEST nvmf_lvol 00:06:40.567 ************************************ 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:40.567 * Looking for test storage... 00:06:40.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.567 --rc genhtml_branch_coverage=1 00:06:40.567 --rc genhtml_function_coverage=1 00:06:40.567 --rc genhtml_legend=1 00:06:40.567 --rc geninfo_all_blocks=1 00:06:40.567 --rc geninfo_unexecuted_blocks=1 00:06:40.567 00:06:40.567 ' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.567 --rc genhtml_branch_coverage=1 00:06:40.567 --rc genhtml_function_coverage=1 00:06:40.567 --rc genhtml_legend=1 00:06:40.567 --rc geninfo_all_blocks=1 00:06:40.567 --rc geninfo_unexecuted_blocks=1 00:06:40.567 00:06:40.567 ' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.567 --rc genhtml_branch_coverage=1 00:06:40.567 --rc genhtml_function_coverage=1 00:06:40.567 --rc genhtml_legend=1 00:06:40.567 --rc geninfo_all_blocks=1 00:06:40.567 --rc geninfo_unexecuted_blocks=1 00:06:40.567 00:06:40.567 ' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.567 --rc genhtml_branch_coverage=1 00:06:40.567 --rc genhtml_function_coverage=1 00:06:40.567 --rc genhtml_legend=1 00:06:40.567 --rc geninfo_all_blocks=1 00:06:40.567 --rc geninfo_unexecuted_blocks=1 00:06:40.567 00:06:40.567 ' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
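The lt / cmp_versions trace above is a pure-bash, field-by-field numeric version compare, used here to decide which --rc option spelling the installed lcov understands. The same idea as a self-contained sketch (the real scripts/common.sh implementation also handles the '>' and '=' operators and mixed '.-:' separators):

    # Field-by-field numeric compare: returns 0 when $1 < $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"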
00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # : 0 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:06:40.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@56 -- # have_pci_nics=0 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # xtrace_disable 00:06:40.567 11:41:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_devs=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_devs 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_net_devs=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # pci_drivers=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # local -A pci_drivers 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # net_devs=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga net_devs 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # e810=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga e810 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # x722=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga x722 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # mlx=() 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # local -ga mlx 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:48.706 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:48.706 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:06:48.706 11:41:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:48.706 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:48.706 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@257 -- # (( 2 
> 1 )) 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:06:48.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:06:48.706 00:06:48.706 --- 10.0.0.2 ping statistics --- 00:06:48.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.706 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:06:48.706 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:48.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:06:48.706 00:06:48.706 --- 10.0.0.1 ping statistics --- 00:06:48.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.706 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=4048592 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 4048592 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4048592 ']' 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.707 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.707 [2024-12-09 11:41:55.973717] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
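The sequence above splits the two E810 ports between a fresh target network namespace and the default (initiator) namespace, proves connectivity both ways, loads nvme-tcp, and then launches nvmf_tgt inside the namespace. Collected into one sketch, with interface names, addresses, and flags as in the log; the RPC polling loop is a simplified stand-in for the waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

    # The RPC socket is a unix-domain socket, visible across namespaces,
    # so rpc.py needs no netns exec; poll until the target answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done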
00:06:48.707 [2024-12-09 11:41:55.973779] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.707 [2024-12-09 11:41:56.070651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.707 [2024-12-09 11:41:56.123134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.707 [2024-12-09 11:41:56.123190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.707 [2024-12-09 11:41:56.123201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.707 [2024-12-09 11:41:56.123209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.707 [2024-12-09 11:41:56.123215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.707 [2024-12-09 11:41:56.125093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.707 [2024-12-09 11:41:56.125226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.707 [2024-12-09 11:41:56.125227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.968 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:49.229 [2024-12-09 11:41:56.964602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.229 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.490 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:49.490 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.751 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:49.751 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:49.751 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:50.012 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d72aa9a1-7b52-43e6-9ed8-3f74036a162a 00:06:50.012 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d72aa9a1-7b52-43e6-9ed8-3f74036a162a lvol 20 00:06:50.272 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f41dab7f-f40c-4de6-85d0-fcfcfdfef141 00:06:50.273 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.273 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f41dab7f-f40c-4de6-85d0-fcfcfdfef141 00:06:50.533 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.793 [2024-12-09 11:41:58.458062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.794 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.794 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4049287 00:06:50.794 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:50.794 11:41:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:52.180 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f41dab7f-f40c-4de6-85d0-fcfcfdfef141 MY_SNAPSHOT 00:06:52.180 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dec20eac-8efb-4eb1-ba58-40f4f00e833f 00:06:52.180 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f41dab7f-f40c-4de6-85d0-fcfcfdfef141 30 00:06:52.441 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dec20eac-8efb-4eb1-ba58-40f4f00e833f MY_CLONE 00:06:52.441 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d87af31a-6c5c-4126-bcde-3ba0c026e381 00:06:52.701 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d87af31a-6c5c-4126-bcde-3ba0c026e381 00:06:52.961 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4049287 00:07:02.958 Initializing NVMe Controllers 00:07:02.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:02.958 Controller IO queue size 128, less than required. 00:07:02.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
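The RPC sequence traced above (target/nvmf_lvol.sh@21 through @50) builds a raid0 over two 64 MiB malloc bdevs, layers a logical-volume store on it, exports a 20 MiB lvol over NVMe/TCP, and then snapshots, grows, clones, and inflates the volume while spdk_nvme_perf keeps 128-deep random writes in flight. The same flow as one sketch, every command and size taken from the trace; only the captured UUID variables are added:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the store UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Keep writes in flight while the lvol operations run underneath.
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current data
    $rpc bdev_lvol_resize "$lvol" 30                      # grow live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # detach clone from snapshot
    wait "$perf_pid"                                      # let the 10 s workload finish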
00:07:02.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:02.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:02.958 Initialization complete. Launching workers. 00:07:02.958 ======================================================== 00:07:02.958 Latency(us) 00:07:02.958 Device Information : IOPS MiB/s Average min max 00:07:02.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16028.76 62.61 7989.81 1500.17 64291.38 00:07:02.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17133.55 66.93 7470.89 2152.32 49809.43 00:07:02.958 ======================================================== 00:07:02.958 Total : 33162.31 129.54 7721.71 1500.17 64291.38 00:07:02.958 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f41dab7f-f40c-4de6-85d0-fcfcfdfef141 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d72aa9a1-7b52-43e6-9ed8-3f74036a162a 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # sync 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # set +e 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # for i in {1..20} 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:07:02.958 rmmod nvme_tcp 00:07:02.958 rmmod nvme_fabrics 00:07:02.958 rmmod nvme_keyring 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # set -e 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@130 -- # return 0 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 4048592 ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 4048592 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4048592 ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4048592 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4048592 00:07:02.958 11:42:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4048592' 00:07:02.958 killing process with pid 4048592 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4048592 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4048592 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # iptr 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # remove_spdk_ns 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.958 11:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:07:04.344 00:07:04.344 real 0m23.738s 00:07:04.344 user 1m4.176s 00:07:04.344 sys 0m8.551s 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.344 ************************************ 00:07:04.344 END TEST nvmf_lvol 00:07:04.344 ************************************ 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.344 11:42:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.344 ************************************ 00:07:04.344 START TEST nvmf_lvs_grow 00:07:04.344 ************************************ 00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.344 * Looking for test storage... 
00:07:04.344 * Looking for test storage...
00:07:04.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:04.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.344 --rc genhtml_branch_coverage=1
00:07:04.344 --rc genhtml_function_coverage=1
00:07:04.344 --rc genhtml_legend=1
00:07:04.344 --rc geninfo_all_blocks=1
00:07:04.344 --rc geninfo_unexecuted_blocks=1
00:07:04.344
00:07:04.344 '
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:04.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.344 --rc genhtml_branch_coverage=1
00:07:04.344 --rc genhtml_function_coverage=1
00:07:04.344 --rc genhtml_legend=1
00:07:04.344 --rc geninfo_all_blocks=1
00:07:04.344 --rc geninfo_unexecuted_blocks=1
00:07:04.344
00:07:04.344 '
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:04.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.344 --rc genhtml_branch_coverage=1
00:07:04.344 --rc genhtml_function_coverage=1
00:07:04.344 --rc genhtml_legend=1
00:07:04.344 --rc geninfo_all_blocks=1
00:07:04.344 --rc geninfo_unexecuted_blocks=1
00:07:04.344
00:07:04.344 '
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:04.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.344 --rc genhtml_branch_coverage=1
00:07:04.344 --rc genhtml_function_coverage=1
00:07:04.344 --rc genhtml_legend=1
00:07:04.344 --rc geninfo_all_blocks=1
00:07:04.344 --rc geninfo_unexecuted_blocks=1
00:07:04.344
00:07:04.344 '
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
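[editor's note] The cmp_versions trace above walks two dotted version strings field by field. A condensed sketch of the same idea (not the exact scripts/common.sh code):

    # returns 0 when $1 is strictly older than $2, e.g. version_lt 1.15 2
    version_lt() {
      local IFS=.-                       # split fields on '.' and '-', as IFS=.-: does above
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                           # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates the branch/function coverage flags"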
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:04.344 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # IRDMA_ENA=1
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.606 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # : 0
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # build_nvmf_app_args
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']'
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@38 -- # '[' -n '' ']'
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']'
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@56 -- # have_pci_nics=0
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
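[editor's note] The "[: : integer expression expected" complaint above is the shell testing an empty variable with -eq inside build_nvmf_app_args; the test still behaves as "false", so the run continues. A defensive form of the same guard (FLAG is a placeholder, not the actual variable name in common.sh):

    # default the unset flag to 0 so '[' always sees an integer
    if [ "${FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--some-extra-arg)   # hypothetical extra argument
    fi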
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # xtrace_disable
00:07:04.607 11:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_devs=()
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_devs
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_net_devs=()
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -a pci_net_devs
00:07:12.756 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # pci_drivers=()
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # local -A pci_drivers
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # net_devs=()
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga net_devs
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # e810=()
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga e810
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # x722=()
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga x722
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # mlx=()
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # local -ga mlx
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # [[ tcp == rdma ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ e810 == e810 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # (( 2 == 0 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
Found 0000:4b:00.0 (0x8086 - 0x159b)
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@369 -- # [[ ice == unknown ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ ice == unbound ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@379 -- # [[ tcp == rdma ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
Found 0000:4b:00.1 (0x8086 - 0x159b)
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@369 -- # [[ ice == unknown ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ ice == unbound ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@379 -- # [[ tcp == rdma ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@393 -- # (( 0 > 0 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ e810 == e810 ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ tcp == rdma ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1
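[editor's note] The discovery loop above maps each supported PCI function to its kernel net device purely through sysfs. The core of it, lifted from the trace into a standalone sketch (pci_devs is assumed to hold PCI addresses such as 0000:4b:00.0):

    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
      net_devs+=("${pci_net_devs[@]}")
    done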
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@257 -- # (( 2 > 1 ))
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP=
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP=
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:07:12.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:12.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms
00:07:12.757
00:07:12.757 --- 10.0.0.2 ping statistics ---
00:07:12.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:12.757 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms
00:07:12.757 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:12.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:12.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:07:12.758
00:07:12.758 --- 10.0.0.1 ping statistics ---
00:07:12.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:12.758 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=4056233
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 4056233
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 4056233 ']'
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:12.758 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:12.758 [2024-12-09 11:42:19.670849] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
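[editor's note] Condensing the namespace plumbing above: the target NIC is moved into its own network namespace while the initiator NIC stays in the root namespace, and a ping in each direction sanity-checks the link (interface names as discovered on this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator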
00:07:12.758 [2024-12-09 11:42:19.670908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:12.758 [2024-12-09 11:42:19.763376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.758 [2024-12-09 11:42:19.799367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:12.758 [2024-12-09 11:42:19.799403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:12.758 [2024-12-09 11:42:19.799411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:12.758 [2024-12-09 11:42:19.799418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:12.758 [2024-12-09 11:42:19.799424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:12.758 [2024-12-09 11:42:19.799991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:13.020 [2024-12-09 11:42:20.667623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
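[editor's note] The target bring-up above reduces to four steps; a sketch using the exact commands from the trace, with rpc.py standing in for the full scripts/rpc.py path and waitforlisten being the autotest_common.sh helper that polls the RPC socket:

    ip netns exec cvl_0_0_ns_spdk /path/to/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # path abbreviated
    nvmfpid=$!
    waitforlisten "$nvmfpid"                        # blocks until /var/tmp/spdk.sock answers
    rpc.py nvmf_create_transport -t tcp -o -u 8192  # transport options exactly as captured above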
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:13.020 ************************************
00:07:13.020 START TEST lvs_grow_clean
00:07:13.020 ************************************
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:13.020 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:13.281 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:07:13.281 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:07:13.281 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:13.281 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:13.281 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:07:13.543 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:07:13.543 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:07:13.543 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 lvol 150
00:07:13.805 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=22804be7-d9e5-4297-962e-873a377f4c87
00:07:13.805 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:13.805 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:07:13.805 [2024-12-09 11:42:21.621494] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:07:13.805 [2024-12-09 11:42:21.621571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:07:13.805 true
00:07:13.805 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:13.805 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:07:14.067 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:07:14.067 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:14.329 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 22804be7-d9e5-4297-962e-873a377f4c87
00:07:14.329 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:14.591 [2024-12-09 11:42:22.339800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:14.591 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:14.853 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4056941
00:07:14.853 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:14.853 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4056941 /var/tmp/bdevperf.sock
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 4056941 ']'
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:14.854 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:14.854 [2024-12-09 11:42:22.593918] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
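[editor's note] The provisioning sequence above, collapsed into a script-shaped sketch: every call appears verbatim in the trace; rpc.py abbreviates the full scripts/rpc.py path, and capturing the UUIDs into variables is an assumption about how the test script plumbs them:

    truncate -s 200M /path/to/aio_bdev                 # 200M file backs the AIO bdev
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150) # 150M lvol inside the store
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420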
00:07:14.854 [2024-12-09 11:42:22.593992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056941 ]
00:07:14.854 [2024-12-09 11:42:22.685949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.854 [2024-12-09 11:42:22.738336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.799 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:15.799 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:07:15.799 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:15.799 Nvme0n1
00:07:16.061 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:16.061 [
00:07:16.061 {
00:07:16.061 "name": "Nvme0n1",
00:07:16.061 "aliases": [
00:07:16.061 "22804be7-d9e5-4297-962e-873a377f4c87"
00:07:16.061 ],
00:07:16.061 "product_name": "NVMe disk",
00:07:16.061 "block_size": 4096,
00:07:16.061 "num_blocks": 38912,
00:07:16.061 "uuid": "22804be7-d9e5-4297-962e-873a377f4c87",
00:07:16.061 "numa_id": 0,
00:07:16.061 "assigned_rate_limits": {
00:07:16.061 "rw_ios_per_sec": 0,
00:07:16.061 "rw_mbytes_per_sec": 0,
00:07:16.061 "r_mbytes_per_sec": 0,
00:07:16.061 "w_mbytes_per_sec": 0
00:07:16.061 },
00:07:16.061 "claimed": false,
00:07:16.061 "zoned": false,
00:07:16.061 "supported_io_types": {
00:07:16.061 "read": true,
00:07:16.061 "write": true,
00:07:16.061 "unmap": true,
00:07:16.061 "flush": true,
00:07:16.061 "reset": true,
00:07:16.061 "nvme_admin": true,
00:07:16.061 "nvme_io": true,
00:07:16.061 "nvme_io_md": false,
00:07:16.061 "write_zeroes": true,
00:07:16.061 "zcopy": false,
00:07:16.061 "get_zone_info": false,
00:07:16.061 "zone_management": false,
00:07:16.061 "zone_append": false,
00:07:16.061 "compare": true,
00:07:16.061 "compare_and_write": true,
00:07:16.061 "abort": true,
00:07:16.061 "seek_hole": false,
00:07:16.061 "seek_data": false,
00:07:16.061 "copy": true,
00:07:16.061 "nvme_iov_md": false
00:07:16.061 },
00:07:16.061 "memory_domains": [
00:07:16.061 {
00:07:16.061 "dma_device_id": "system",
00:07:16.061 "dma_device_type": 1
00:07:16.061 }
00:07:16.061 ],
00:07:16.061 "driver_specific": {
00:07:16.061 "nvme": [
00:07:16.061 {
00:07:16.061 "trid": {
00:07:16.061 "trtype": "TCP",
00:07:16.061 "adrfam": "IPv4",
00:07:16.061 "traddr": "10.0.0.2",
00:07:16.061 "trsvcid": "4420",
00:07:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:16.061 },
00:07:16.061 "ctrlr_data": {
00:07:16.061 "cntlid": 1,
00:07:16.061 "vendor_id": "0x8086",
00:07:16.061 "model_number": "SPDK bdev Controller",
00:07:16.061 "serial_number": "SPDK0",
00:07:16.061 "firmware_revision": "25.01",
00:07:16.061 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:16.061 "oacs": {
00:07:16.061 "security": 0,
00:07:16.061 "format": 0,
00:07:16.061 "firmware": 0,
00:07:16.061 "ns_manage": 0
00:07:16.061 },
00:07:16.061 "multi_ctrlr": true,
00:07:16.061 "ana_reporting": false
00:07:16.061 },
00:07:16.061 "vs": {
00:07:16.061 "nvme_version": "1.3"
00:07:16.061 },
00:07:16.061 "ns_data": {
00:07:16.061 "id": 1,
00:07:16.061 "can_share": true
00:07:16.061 }
00:07:16.061 }
00:07:16.061 ],
00:07:16.061 "mp_policy": "active_passive"
00:07:16.061 }
00:07:16.061 }
00:07:16.061 ]
00:07:16.061 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:16.061 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4057125
00:07:16.061 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:16.322 Running I/O for 10 seconds...
00:07:17.264 Latency(us)
00:07:17.264 [2024-12-09T10:42:25.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:17.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:17.264 Nvme0n1 : 1.00 25018.00 97.73 0.00 0.00 0.00 0.00 0.00
00:07:17.264 [2024-12-09T10:42:25.150Z] ===================================================================================================================
00:07:17.264 [2024-12-09T10:42:25.150Z] Total : 25018.00 97.73 0.00 0.00 0.00 0.00 0.00
00:07:17.264
00:07:18.206 11:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:18.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:18.206 Nvme0n1 : 2.00 25131.00 98.17 0.00 0.00 0.00 0.00 0.00
00:07:18.206 [2024-12-09T10:42:26.092Z] ===================================================================================================================
00:07:18.206 [2024-12-09T10:42:26.092Z] Total : 25131.00 98.17 0.00 0.00 0.00 0.00 0.00
00:07:18.206
00:07:18.206 true
00:07:18.206 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:18.206 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:18.466 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:18.466 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:18.466 11:42:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4057125
00:07:19.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:19.410 Nvme0n1 : 3.00 25206.67 98.46 0.00 0.00 0.00 0.00 0.00
00:07:19.410 [2024-12-09T10:42:27.296Z] ===================================================================================================================
00:07:19.410 [2024-12-09T10:42:27.296Z] Total : 25206.67 98.46 0.00 0.00 0.00 0.00 0.00
00:07:19.410
00:07:20.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:20.352 Nvme0n1 : 4.00 25244.75 98.61 0.00 0.00 0.00 0.00 0.00
00:07:20.352 [2024-12-09T10:42:28.238Z] ===================================================================================================================
00:07:20.352 [2024-12-09T10:42:28.238Z] Total : 25244.75 98.61 0.00 0.00 0.00 0.00 0.00
00:07:20.352
00:07:21.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:21.296 Nvme0n1 : 5.00 25273.40 98.72 0.00 0.00 0.00 0.00 0.00
00:07:21.296 [2024-12-09T10:42:29.182Z] ===================================================================================================================
00:07:21.296 [2024-12-09T10:42:29.182Z] Total : 25273.40 98.72 0.00 0.00 0.00 0.00 0.00
00:07:21.296
00:07:22.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:22.239 Nvme0n1 : 6.00 25285.17 98.77 0.00 0.00 0.00 0.00 0.00
00:07:22.239 [2024-12-09T10:42:30.125Z] ===================================================================================================================
00:07:22.239 [2024-12-09T10:42:30.125Z] Total : 25285.17 98.77 0.00 0.00 0.00 0.00 0.00
00:07:22.239
00:07:23.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:23.183 Nvme0n1 : 7.00 25293.14 98.80 0.00 0.00 0.00 0.00 0.00
00:07:23.183 [2024-12-09T10:42:31.069Z] ===================================================================================================================
00:07:23.183 [2024-12-09T10:42:31.069Z] Total : 25293.14 98.80 0.00 0.00 0.00 0.00 0.00
00:07:23.183
00:07:24.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:24.125 Nvme0n1 : 8.00 25307.25 98.86 0.00 0.00 0.00 0.00 0.00
00:07:24.125 [2024-12-09T10:42:32.011Z] ===================================================================================================================
00:07:24.125 [2024-12-09T10:42:32.011Z] Total : 25307.25 98.86 0.00 0.00 0.00 0.00 0.00
00:07:24.125
00:07:25.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:25.511 Nvme0n1 : 9.00 25318.33 98.90 0.00 0.00 0.00 0.00 0.00
00:07:25.511 [2024-12-09T10:42:33.397Z] ===================================================================================================================
00:07:25.511 [2024-12-09T10:42:33.397Z] Total : 25318.33 98.90 0.00 0.00 0.00 0.00 0.00
00:07:25.511
00:07:26.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:26.452 Nvme0n1 : 10.00 25327.30 98.93 0.00 0.00 0.00 0.00 0.00
00:07:26.452 [2024-12-09T10:42:34.338Z] ===================================================================================================================
00:07:26.452 [2024-12-09T10:42:34.338Z] Total : 25327.30 98.93 0.00 0.00 0.00 0.00 0.00
00:07:26.452
00:07:26.452
00:07:26.452 Latency(us)
00:07:26.452 [2024-12-09T10:42:34.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:26.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:26.452 Nvme0n1 : 10.00 25326.07 98.93 0.00 0.00 5050.52 2116.27 9448.11
00:07:26.452 [2024-12-09T10:42:34.338Z] ===================================================================================================================
00:07:26.452 [2024-12-09T10:42:34.338Z] Total : 25326.07 98.93 0.00 0.00 5050.52 2116.27 9448.11
00:07:26.452 {
00:07:26.452 "results": [
00:07:26.452 {
00:07:26.452 "job": "Nvme0n1",
00:07:26.452 "core_mask": "0x2",
00:07:26.452 "workload": "randwrite",
00:07:26.452 "status": "finished",
00:07:26.452 "queue_depth": 128,
00:07:26.452 "io_size": 4096,
00:07:26.452 "runtime": 10.003053,
00:07:26.452 "iops": 25326.06795145442,
00:07:26.452 "mibps": 98.92995293536883,
00:07:26.452 "io_failed": 0,
00:07:26.452 "io_timeout": 0,
00:07:26.452 "avg_latency_us": 5050.520804932541,
00:07:26.452 "min_latency_us": 2116.266666666667,
00:07:26.452 "max_latency_us": 9448.106666666667
00:07:26.452 }
00:07:26.452 ],
00:07:26.452 "core_count": 1
00:07:26.452 }
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4056941
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 4056941 ']'
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 4056941
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056941
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056941'
killing process with pid 4056941
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 4056941
Received shutdown signal, test time was about 10.000000 seconds
00:07:26.452
00:07:26.452 Latency(us)
00:07:26.452 [2024-12-09T10:42:34.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:26.452 [2024-12-09T10:42:34.338Z] ===================================================================================================================
00:07:26.452 [2024-12-09T10:42:34.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 4056941
00:07:26.452 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:26.713 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:26.713 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:26.713 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:26.974 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:26.974 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:26.974 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:26.974 [2024-12-09 11:42:34.838150] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:27.235 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0
00:07:27.235 request:
00:07:27.235 {
00:07:27.235 "uuid": "9cc3a37b-365e-47e4-8679-6ef9f84f66d0",
00:07:27.235 "method": "bdev_lvol_get_lvstores",
00:07:27.235 "req_id": 1
00:07:27.235 }
00:07:27.235 Got JSON-RPC error response
00:07:27.235 response:
00:07:27.235 {
00:07:27.235 "code": -19,
00:07:27.235 "message": "No such device"
00:07:27.235 }
00:07:27.235 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:07:27.235 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:27.235 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:27.235 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
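[editor's note] NOT (from autotest_common.sh) wraps a command that is expected to fail, inverting its exit status so the JSON-RPC error above is the passing outcome. A hypothetical reduced form; as the es > 128 check in the trace shows, the real helper also screens out signal-death exit codes rather than blindly inverting:

    NOT() { ! "$@"; }                                 # reduced sketch only
    NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"       # succeeds because the lvstore is gone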
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.503 aio_bdev 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 22804be7-d9e5-4297-962e-873a377f4c87 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=22804be7-d9e5-4297-962e-873a377f4c87 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.503 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.764 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22804be7-d9e5-4297-962e-873a377f4c87 -t 2000 00:07:27.764 [ 00:07:27.764 { 00:07:27.764 "name": "22804be7-d9e5-4297-962e-873a377f4c87", 00:07:27.764 "aliases": [ 00:07:27.764 "lvs/lvol" 00:07:27.764 ], 00:07:27.764 "product_name": "Logical Volume", 00:07:27.764 "block_size": 4096, 00:07:27.764 "num_blocks": 38912, 00:07:27.764 "uuid": "22804be7-d9e5-4297-962e-873a377f4c87", 00:07:27.764 "assigned_rate_limits": { 00:07:27.764 "rw_ios_per_sec": 0, 00:07:27.764 "rw_mbytes_per_sec": 0, 00:07:27.764 "r_mbytes_per_sec": 0, 00:07:27.764 "w_mbytes_per_sec": 0 00:07:27.764 }, 00:07:27.764 "claimed": false, 00:07:27.764 "zoned": false, 00:07:27.764 "supported_io_types": { 00:07:27.764 "read": true, 00:07:27.764 "write": true, 00:07:27.764 "unmap": true, 00:07:27.764 "flush": false, 00:07:27.764 "reset": true, 00:07:27.764 "nvme_admin": false, 00:07:27.764 "nvme_io": false, 00:07:27.764 "nvme_io_md": false, 00:07:27.764 "write_zeroes": true, 00:07:27.764 "zcopy": false, 00:07:27.764 "get_zone_info": false, 00:07:27.764 "zone_management": false, 00:07:27.764 "zone_append": false, 00:07:27.764 "compare": false, 00:07:27.764 "compare_and_write": false, 00:07:27.764 "abort": false, 00:07:27.764 "seek_hole": true, 00:07:27.764 "seek_data": true, 00:07:27.764 "copy": false, 00:07:27.764 "nvme_iov_md": false 00:07:27.764 }, 00:07:27.764 "driver_specific": { 00:07:27.764 "lvol": { 00:07:27.764 "lvol_store_uuid": "9cc3a37b-365e-47e4-8679-6ef9f84f66d0", 00:07:27.764 "base_bdev": "aio_bdev", 00:07:27.764 "thin_provision": false, 00:07:27.764 "num_allocated_clusters": 38, 00:07:27.764 "snapshot": false, 00:07:27.764 "clone": false, 00:07:27.764 "esnap_clone": false 00:07:27.764 } 00:07:27.764 } 00:07:27.764 } 00:07:27.764 ] 00:07:27.764 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:27.764 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 00:07:27.764 
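The exchange above is the clean-mode hot-remove check: deleting the AIO base bdev closes the lvstore (see the vbdev_lvs_hotremove_cb NOTICE), bdev_lvol_get_lvstores then fails with JSON-RPC error -19 (No such device), and re-creating the AIO bdev from the same backing file lets the lvstore and its lvol be replayed from on-disk metadata. A condensed sketch of that sequence, assuming an SPDK checkout at ./spdk (the log uses the full Jenkins workspace path) and the UUIDs from this run:

  # drop the base bdev; the lvstore closes along with it
  ./spdk/scripts/rpc.py bdev_aio_delete aio_bdev
  # expected failure: the lvstore is gone together with its base bdev
  ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 || true
  # re-register the same file; lvstore metadata is loaded back from disk
  ./spdk/scripts/rpc.py bdev_aio_create ./spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # wait for the lvol to reappear, then confirm free_clusters is unchanged
  ./spdk/scripts/rpc.py bdev_get_bdevs -b 22804be7-d9e5-4297-962e-873a377f4c87 -t 2000
  ./spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 | jq -r '.[0].free_clusters'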
11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:28.025 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:28.025 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 00:07:28.025 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.025 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.025 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 22804be7-d9e5-4297-962e-873a377f4c87 00:07:28.285 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9cc3a37b-365e-47e4-8679-6ef9f84f66d0 00:07:28.546 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.546 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.807 00:07:28.807 real 0m15.710s 00:07:28.807 user 0m15.437s 00:07:28.807 sys 0m1.405s 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.807 ************************************ 00:07:28.807 END TEST lvs_grow_clean 00:07:28.807 ************************************ 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.807 ************************************ 00:07:28.807 START TEST lvs_grow_dirty 00:07:28.807 ************************************ 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.807 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.068 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.068 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:29.068 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:29.068 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:29.068 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:29.329 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:29.329 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:29.329 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9648b031-be2c-45c0-98ea-a702a3b9436c lvol 150 00:07:29.591 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:29.591 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.591 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.591 [2024-12-09 11:42:37.374302] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:29.591 [2024-12-09 11:42:37.374345] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.591 true 00:07:29.591 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:29.591 11:42:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:29.853 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:29.853 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.853 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:30.113 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.374 [2024-12-09 11:42:38.048288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4060037 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4060037 /var/tmp/bdevperf.sock 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4060037 ']' 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.374 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.635 [2024-12-09 11:42:38.283014] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
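The dirty-grow setup recorded above sizes everything so that growth is observable: a 200M backing file with 4 MiB clusters yields 49 data clusters, a 150M lvol is carved out of it, and the file is then truncated to 400M and rescanned, doubling the AIO block count (51200 -> 102400) while total_data_clusters stays 49 until bdev_lvol_grow_lvstore runs later in the test. A minimal sketch of the same flow, assuming the ./spdk paths as above:

  truncate -s 200M ./spdk/test/nvmf/target/aio_bdev
  ./spdk/scripts/rpc.py bdev_aio_create ./spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # 4 MiB clusters; metadata pages reserved via --md-pages-per-cluster-ratio
  lvs=$(./spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(./spdk/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  # grow the file under the live lvstore; the rescan NOTICE above reports the
  # new block count, but the lvstore itself has not been grown yet
  truncate -s 400M ./spdk/test/nvmf/target/aio_bdev
  ./spdk/scripts/rpc.py bdev_aio_rescan aio_bdev

The lvol is then exported over NVMe/TCP (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2:4420), and the bdevperf launch that follows drives 4 KiB random writes at queue depth 128 for 10 seconds so the grow happens mid-run.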
00:07:30.635 [2024-12-09 11:42:38.283072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4060037 ] 00:07:30.635 [2024-12-09 11:42:38.368297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.635 [2024-12-09 11:42:38.398052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.207 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.207 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:31.207 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.468 Nvme0n1 00:07:31.468 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:31.730 [ 00:07:31.730 { 00:07:31.730 "name": "Nvme0n1", 00:07:31.730 "aliases": [ 00:07:31.730 "a6f64f2b-d9b7-4e1a-ac1b-258431747741" 00:07:31.730 ], 00:07:31.730 "product_name": "NVMe disk", 00:07:31.730 "block_size": 4096, 00:07:31.730 "num_blocks": 38912, 00:07:31.730 "uuid": "a6f64f2b-d9b7-4e1a-ac1b-258431747741", 00:07:31.730 "numa_id": 0, 00:07:31.730 "assigned_rate_limits": { 00:07:31.730 "rw_ios_per_sec": 0, 00:07:31.730 "rw_mbytes_per_sec": 0, 00:07:31.730 "r_mbytes_per_sec": 0, 00:07:31.730 "w_mbytes_per_sec": 0 00:07:31.730 }, 00:07:31.730 "claimed": false, 00:07:31.730 "zoned": false, 00:07:31.730 "supported_io_types": { 00:07:31.730 "read": true, 00:07:31.730 "write": true, 00:07:31.730 "unmap": true, 00:07:31.730 "flush": true, 00:07:31.730 "reset": true, 00:07:31.730 "nvme_admin": true, 00:07:31.730 "nvme_io": true, 00:07:31.730 "nvme_io_md": false, 00:07:31.730 "write_zeroes": true, 00:07:31.730 "zcopy": false, 00:07:31.730 "get_zone_info": false, 00:07:31.730 "zone_management": false, 00:07:31.730 "zone_append": false, 00:07:31.730 "compare": true, 00:07:31.730 "compare_and_write": true, 00:07:31.730 "abort": true, 00:07:31.730 "seek_hole": false, 00:07:31.730 "seek_data": false, 00:07:31.730 "copy": true, 00:07:31.730 "nvme_iov_md": false 00:07:31.730 }, 00:07:31.730 "memory_domains": [ 00:07:31.730 { 00:07:31.730 "dma_device_id": "system", 00:07:31.730 "dma_device_type": 1 00:07:31.730 } 00:07:31.730 ], 00:07:31.730 "driver_specific": { 00:07:31.730 "nvme": [ 00:07:31.730 { 00:07:31.730 "trid": { 00:07:31.730 "trtype": "TCP", 00:07:31.730 "adrfam": "IPv4", 00:07:31.730 "traddr": "10.0.0.2", 00:07:31.730 "trsvcid": "4420", 00:07:31.730 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:31.730 }, 00:07:31.730 "ctrlr_data": { 00:07:31.730 "cntlid": 1, 00:07:31.730 "vendor_id": "0x8086", 00:07:31.730 "model_number": "SPDK bdev Controller", 00:07:31.730 "serial_number": "SPDK0", 00:07:31.730 "firmware_revision": "25.01", 00:07:31.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.730 "oacs": { 00:07:31.730 "security": 0, 00:07:31.730 "format": 0, 00:07:31.730 "firmware": 0, 00:07:31.730 "ns_manage": 0 00:07:31.730 }, 00:07:31.730 "multi_ctrlr": true, 00:07:31.730 
"ana_reporting": false 00:07:31.730 }, 00:07:31.730 "vs": { 00:07:31.730 "nvme_version": "1.3" 00:07:31.730 }, 00:07:31.730 "ns_data": { 00:07:31.730 "id": 1, 00:07:31.730 "can_share": true 00:07:31.730 } 00:07:31.730 } 00:07:31.730 ], 00:07:31.730 "mp_policy": "active_passive" 00:07:31.730 } 00:07:31.730 } 00:07:31.730 ] 00:07:31.730 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:31.730 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4060315 00:07:31.730 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:31.730 Running I/O for 10 seconds... 00:07:32.672 Latency(us) 00:07:32.672 [2024-12-09T10:42:40.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.672 Nvme0n1 : 1.00 24920.00 97.34 0.00 0.00 0.00 0.00 0.00 00:07:32.672 [2024-12-09T10:42:40.558Z] =================================================================================================================== 00:07:32.672 [2024-12-09T10:42:40.558Z] Total : 24920.00 97.34 0.00 0.00 0.00 0.00 0.00 00:07:32.672 00:07:33.613 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:33.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.874 Nvme0n1 : 2.00 25067.50 97.92 0.00 0.00 0.00 0.00 0.00 00:07:33.874 [2024-12-09T10:42:41.760Z] =================================================================================================================== 00:07:33.874 [2024-12-09T10:42:41.760Z] Total : 25067.50 97.92 0.00 0.00 0.00 0.00 0.00 00:07:33.874 00:07:33.874 true 00:07:33.874 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:33.874 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:34.138 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.138 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.138 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4060315 00:07:34.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.711 Nvme0n1 : 3.00 25138.67 98.20 0.00 0.00 0.00 0.00 0.00 00:07:34.711 [2024-12-09T10:42:42.597Z] =================================================================================================================== 00:07:34.711 [2024-12-09T10:42:42.597Z] Total : 25138.67 98.20 0.00 0.00 0.00 0.00 0.00 00:07:34.711 00:07:36.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.094 Nvme0n1 : 4.00 25189.25 98.40 0.00 0.00 0.00 0.00 0.00 00:07:36.094 [2024-12-09T10:42:43.980Z] 
=================================================================================================================== 00:07:36.094 [2024-12-09T10:42:43.980Z] Total : 25189.25 98.40 0.00 0.00 0.00 0.00 0.00 00:07:36.094 00:07:36.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.667 Nvme0n1 : 5.00 25220.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:36.667 [2024-12-09T10:42:44.553Z] =================================================================================================================== 00:07:36.667 [2024-12-09T10:42:44.553Z] Total : 25220.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:36.667 00:07:38.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.053 Nvme0n1 : 6.00 25251.00 98.64 0.00 0.00 0.00 0.00 0.00 00:07:38.053 [2024-12-09T10:42:45.939Z] =================================================================================================================== 00:07:38.053 [2024-12-09T10:42:45.939Z] Total : 25251.00 98.64 0.00 0.00 0.00 0.00 0.00 00:07:38.053 00:07:38.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.996 Nvme0n1 : 7.00 25273.57 98.72 0.00 0.00 0.00 0.00 0.00 00:07:38.996 [2024-12-09T10:42:46.882Z] =================================================================================================================== 00:07:38.996 [2024-12-09T10:42:46.882Z] Total : 25273.57 98.72 0.00 0.00 0.00 0.00 0.00 00:07:38.996 00:07:39.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.938 Nvme0n1 : 8.00 25290.38 98.79 0.00 0.00 0.00 0.00 0.00 00:07:39.938 [2024-12-09T10:42:47.824Z] =================================================================================================================== 00:07:39.938 [2024-12-09T10:42:47.824Z] Total : 25290.38 98.79 0.00 0.00 0.00 0.00 0.00 00:07:39.938 00:07:40.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.880 Nvme0n1 : 9.00 25303.44 98.84 0.00 0.00 0.00 0.00 0.00 00:07:40.880 [2024-12-09T10:42:48.766Z] =================================================================================================================== 00:07:40.880 [2024-12-09T10:42:48.766Z] Total : 25303.44 98.84 0.00 0.00 0.00 0.00 0.00 00:07:40.880 00:07:41.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.821 Nvme0n1 : 10.00 25320.20 98.91 0.00 0.00 0.00 0.00 0.00 00:07:41.821 [2024-12-09T10:42:49.707Z] =================================================================================================================== 00:07:41.821 [2024-12-09T10:42:49.707Z] Total : 25320.20 98.91 0.00 0.00 0.00 0.00 0.00 00:07:41.821 00:07:41.821 00:07:41.821 Latency(us) 00:07:41.821 [2024-12-09T10:42:49.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.821 Nvme0n1 : 10.00 25321.97 98.91 0.00 0.00 5051.69 3194.88 9557.33 00:07:41.821 [2024-12-09T10:42:49.707Z] =================================================================================================================== 00:07:41.821 [2024-12-09T10:42:49.707Z] Total : 25321.97 98.91 0.00 0.00 5051.69 3194.88 9557.33 00:07:41.821 { 00:07:41.821 "results": [ 00:07:41.821 { 00:07:41.821 "job": "Nvme0n1", 00:07:41.821 "core_mask": "0x2", 00:07:41.821 "workload": "randwrite", 00:07:41.821 "status": "finished", 00:07:41.821 "queue_depth": 128, 00:07:41.821 "io_size": 4096, 00:07:41.821 
"runtime": 10.004354, 00:07:41.821 "iops": 25321.974812166784, 00:07:41.821 "mibps": 98.9139641100265, 00:07:41.821 "io_failed": 0, 00:07:41.821 "io_timeout": 0, 00:07:41.821 "avg_latency_us": 5051.693235121515, 00:07:41.821 "min_latency_us": 3194.88, 00:07:41.821 "max_latency_us": 9557.333333333334 00:07:41.821 } 00:07:41.821 ], 00:07:41.821 "core_count": 1 00:07:41.821 } 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4060037 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 4060037 ']' 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 4060037 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4060037 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4060037' 00:07:41.821 killing process with pid 4060037 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 4060037 00:07:41.821 Received shutdown signal, test time was about 10.000000 seconds 00:07:41.821 00:07:41.821 Latency(us) 00:07:41.821 [2024-12-09T10:42:49.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.821 [2024-12-09T10:42:49.707Z] =================================================================================================================== 00:07:41.821 [2024-12-09T10:42:49.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:41.821 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 4060037 00:07:42.082 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.082 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.344 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:42.344 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:42.605 11:42:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4056233 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4056233 00:07:42.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4056233 Killed "${NVMF_APP[@]}" "$@" 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=4062409 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 4062409 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 4062409 ']' 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.605 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.605 [2024-12-09 11:42:50.423672] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:07:42.605 [2024-12-09 11:42:50.423724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.866 [2024-12-09 11:42:50.491409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.866 [2024-12-09 11:42:50.519808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.866 [2024-12-09 11:42:50.519839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.866 [2024-12-09 11:42:50.519844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.866 [2024-12-09 11:42:50.519849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:42.866 [2024-12-09 11:42:50.519853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.866 [2024-12-09 11:42:50.520322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.866 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.126 [2024-12-09 11:42:50.805330] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:43.126 [2024-12-09 11:42:50.805410] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:43.126 [2024-12-09 11:42:50.805433] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.126 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.126 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6f64f2b-d9b7-4e1a-ac1b-258431747741 -t 2000 00:07:43.386 [ 00:07:43.386 { 00:07:43.386 "name": "a6f64f2b-d9b7-4e1a-ac1b-258431747741", 00:07:43.386 "aliases": [ 00:07:43.386 "lvs/lvol" 00:07:43.386 ], 00:07:43.386 "product_name": "Logical Volume", 00:07:43.386 "block_size": 4096, 00:07:43.386 "num_blocks": 38912, 00:07:43.386 "uuid": "a6f64f2b-d9b7-4e1a-ac1b-258431747741", 00:07:43.386 "assigned_rate_limits": { 00:07:43.386 "rw_ios_per_sec": 0, 00:07:43.386 "rw_mbytes_per_sec": 0, 
00:07:43.386 "r_mbytes_per_sec": 0, 00:07:43.386 "w_mbytes_per_sec": 0 00:07:43.386 }, 00:07:43.386 "claimed": false, 00:07:43.386 "zoned": false, 00:07:43.386 "supported_io_types": { 00:07:43.386 "read": true, 00:07:43.386 "write": true, 00:07:43.386 "unmap": true, 00:07:43.386 "flush": false, 00:07:43.386 "reset": true, 00:07:43.386 "nvme_admin": false, 00:07:43.386 "nvme_io": false, 00:07:43.386 "nvme_io_md": false, 00:07:43.386 "write_zeroes": true, 00:07:43.386 "zcopy": false, 00:07:43.386 "get_zone_info": false, 00:07:43.386 "zone_management": false, 00:07:43.386 "zone_append": false, 00:07:43.386 "compare": false, 00:07:43.386 "compare_and_write": false, 00:07:43.386 "abort": false, 00:07:43.386 "seek_hole": true, 00:07:43.386 "seek_data": true, 00:07:43.386 "copy": false, 00:07:43.386 "nvme_iov_md": false 00:07:43.386 }, 00:07:43.386 "driver_specific": { 00:07:43.386 "lvol": { 00:07:43.386 "lvol_store_uuid": "9648b031-be2c-45c0-98ea-a702a3b9436c", 00:07:43.386 "base_bdev": "aio_bdev", 00:07:43.386 "thin_provision": false, 00:07:43.386 "num_allocated_clusters": 38, 00:07:43.386 "snapshot": false, 00:07:43.386 "clone": false, 00:07:43.386 "esnap_clone": false 00:07:43.386 } 00:07:43.386 } 00:07:43.386 } 00:07:43.386 ] 00:07:43.386 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:43.386 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:43.386 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:43.647 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:43.647 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:43.647 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:43.647 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:43.647 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:43.907 [2024-12-09 11:42:51.637896] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:43.907 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:44.167 request: 00:07:44.167 { 00:07:44.167 "uuid": "9648b031-be2c-45c0-98ea-a702a3b9436c", 00:07:44.167 "method": "bdev_lvol_get_lvstores", 00:07:44.167 "req_id": 1 00:07:44.167 } 00:07:44.167 Got JSON-RPC error response 00:07:44.167 response: 00:07:44.167 { 00:07:44.167 "code": -19, 00:07:44.167 "message": "No such device" 00:07:44.167 } 00:07:44.167 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:44.167 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.167 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.167 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.167 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:44.167 aio_bdev 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.167 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.167 11:42:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:44.427 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6f64f2b-d9b7-4e1a-ac1b-258431747741 -t 2000 00:07:44.687 [ 00:07:44.687 { 00:07:44.687 "name": "a6f64f2b-d9b7-4e1a-ac1b-258431747741", 00:07:44.687 "aliases": [ 00:07:44.687 "lvs/lvol" 00:07:44.687 ], 00:07:44.687 "product_name": "Logical Volume", 00:07:44.687 "block_size": 4096, 00:07:44.687 "num_blocks": 38912, 00:07:44.687 "uuid": "a6f64f2b-d9b7-4e1a-ac1b-258431747741", 00:07:44.687 "assigned_rate_limits": { 00:07:44.687 "rw_ios_per_sec": 0, 00:07:44.687 "rw_mbytes_per_sec": 0, 00:07:44.688 "r_mbytes_per_sec": 0, 00:07:44.688 "w_mbytes_per_sec": 0 00:07:44.688 }, 00:07:44.688 "claimed": false, 00:07:44.688 "zoned": false, 00:07:44.688 "supported_io_types": { 00:07:44.688 "read": true, 00:07:44.688 "write": true, 00:07:44.688 "unmap": true, 00:07:44.688 "flush": false, 00:07:44.688 "reset": true, 00:07:44.688 "nvme_admin": false, 00:07:44.688 "nvme_io": false, 00:07:44.688 "nvme_io_md": false, 00:07:44.688 "write_zeroes": true, 00:07:44.688 "zcopy": false, 00:07:44.688 "get_zone_info": false, 00:07:44.688 "zone_management": false, 00:07:44.688 "zone_append": false, 00:07:44.688 "compare": false, 00:07:44.688 "compare_and_write": false, 00:07:44.688 "abort": false, 00:07:44.688 "seek_hole": true, 00:07:44.688 "seek_data": true, 00:07:44.688 "copy": false, 00:07:44.688 "nvme_iov_md": false 00:07:44.688 }, 00:07:44.688 "driver_specific": { 00:07:44.688 "lvol": { 00:07:44.688 "lvol_store_uuid": "9648b031-be2c-45c0-98ea-a702a3b9436c", 00:07:44.688 "base_bdev": "aio_bdev", 00:07:44.688 "thin_provision": false, 00:07:44.688 "num_allocated_clusters": 38, 00:07:44.688 "snapshot": false, 00:07:44.688 "clone": false, 00:07:44.688 "esnap_clone": false 00:07:44.688 } 00:07:44.688 } 00:07:44.688 } 00:07:44.688 ] 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:44.688 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:44.948 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:44.948 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6f64f2b-d9b7-4e1a-ac1b-258431747741 00:07:44.948 11:42:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9648b031-be2c-45c0-98ea-a702a3b9436c 00:07:45.208 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.469 00:07:45.469 real 0m16.696s 00:07:45.469 user 0m45.337s 00:07:45.469 sys 0m2.902s 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:45.469 ************************************ 00:07:45.469 END TEST lvs_grow_dirty 00:07:45.469 ************************************ 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:45.469 nvmf_trace.0 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # sync 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # set +e 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # for i in {1..20} 00:07:45.469 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:07:45.469 rmmod nvme_tcp 00:07:45.469 rmmod nvme_fabrics 00:07:45.729 rmmod nvme_keyring 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # set -e 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@130 -- # return 0 00:07:45.729 
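Teardown at the end of the suite follows the nvmftestfini order visible above: archive the tracepoint shared-memory file, sync, unload the kernel NVMe/TCP initiator modules, then (next entries) kill the nvmf_tgt process and restore iptables. A rough sketch of the equivalent manual cleanup, assuming a local ./output directory and an $nvmfpid variable holding the target pid from this run:

  # archive the tracepoint buffer before the target goes away
  tar -C /dev/shm -cvzf ./output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
  sync
  # initiator-side modules; the rmmod output above shows the unload order
  # (nvme_tcp, then nvme_fabrics, then nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # target side: stop nvmf_tgt and drop the SPDK-added iptables rules
  kill "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore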
11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 4062409 ']' 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 4062409 ']' 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4062409' 00:07:45.729 killing process with pid 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 4062409 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:45.729 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # iptr 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # remove_spdk_ns 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.730 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:07:48.276 00:07:48.276 real 0m43.630s 00:07:48.276 user 1m6.455s 00:07:48.276 sys 0m10.299s 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.276 ************************************ 00:07:48.276 END TEST nvmf_lvs_grow 00:07:48.276 ************************************ 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.276 ************************************ 00:07:48.276 START TEST nvmf_bdev_io_wait 00:07:48.276 ************************************ 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.276 * Looking for test storage... 00:07:48.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.276 --rc genhtml_branch_coverage=1 00:07:48.276 --rc genhtml_function_coverage=1 00:07:48.276 --rc genhtml_legend=1 00:07:48.276 --rc geninfo_all_blocks=1 00:07:48.276 --rc geninfo_unexecuted_blocks=1 00:07:48.276 00:07:48.276 ' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.276 --rc genhtml_branch_coverage=1 00:07:48.276 --rc genhtml_function_coverage=1 00:07:48.276 --rc genhtml_legend=1 00:07:48.276 --rc geninfo_all_blocks=1 00:07:48.276 --rc geninfo_unexecuted_blocks=1 00:07:48.276 00:07:48.276 ' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.276 --rc genhtml_branch_coverage=1 00:07:48.276 --rc genhtml_function_coverage=1 00:07:48.276 --rc genhtml_legend=1 00:07:48.276 --rc geninfo_all_blocks=1 00:07:48.276 --rc geninfo_unexecuted_blocks=1 00:07:48.276 00:07:48.276 ' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:48.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.276 --rc genhtml_branch_coverage=1 00:07:48.276 --rc genhtml_function_coverage=1 00:07:48.276 --rc genhtml_legend=1 00:07:48.276 --rc geninfo_all_blocks=1 00:07:48.276 --rc geninfo_unexecuted_blocks=1 00:07:48.276 00:07:48.276 ' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.276 11:42:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.276 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.277 11:42:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # : 0 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:07:48.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@56 -- # have_pci_nics=0 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.277 11:42:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # xtrace_disable 00:07:48.277 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.421 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.421 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_devs=() 00:07:56.421 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_devs 00:07:56.421 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_net_devs=() 00:07:56.421 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # pci_drivers=() 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # local -A pci_drivers 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # net_devs=() 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga net_devs 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # e810=() 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga e810 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # x722=() 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga x722 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # mlx=() 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # local -ga mlx 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.422 
11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:56.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:56.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:56.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:56.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:07:56.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:07:56.422 00:07:56.422 --- 10.0.0.2 ping statistics --- 00:07:56.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.422 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:56.422 00:07:56.422 --- 10.0.0.1 ping statistics --- 00:07:56.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.422 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.422 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=4067406 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 4067406 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 4067406 ']' 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.423 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.423 [2024-12-09 11:43:03.515608] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
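The nvmf_tcp_init sequence traced above is the whole test fixture in miniature: the target-side port is moved into its own network namespace so that initiator and target can share one host while still pushing traffic over a real link. A condensed, standalone sketch of those steps follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are simply the values observed in this run, with the two e810 ports presumably cabled back-to-back on this rig.

  # target port lives in its own namespace; initiator port stays in the root ns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side; the comment tags the rule so
  # teardown can later strip exactly the rules this test added
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify both directions before the target is started
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1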
00:07:56.423 [2024-12-09 11:43:03.515689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.423 [2024-12-09 11:43:03.615910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.423 [2024-12-09 11:43:03.671506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.423 [2024-12-09 11:43:03.671561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.423 [2024-12-09 11:43:03.671570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.423 [2024-12-09 11:43:03.671577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.423 [2024-12-09 11:43:03.671583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.423 [2024-12-09 11:43:03.673665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.423 [2024-12-09 11:43:03.673753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.423 [2024-12-09 11:43:03.674098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.423 [2024-12-09 11:43:03.674100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:56.684 [2024-12-09 11:43:04.430409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 Malloc0 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 [2024-12-09 11:43:04.489578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.684 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4067503 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4067505 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:56.685 { 00:07:56.685 "params": { 
00:07:56.685 "name": "Nvme$subsystem", 00:07:56.685 "trtype": "$TEST_TRANSPORT", 00:07:56.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "$NVMF_PORT", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.685 "hdgst": ${hdgst:-false}, 00:07:56.685 "ddgst": ${ddgst:-false} 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 } 00:07:56.685 EOF 00:07:56.685 )") 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4067507 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:56.685 { 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme$subsystem", 00:07:56.685 "trtype": "$TEST_TRANSPORT", 00:07:56.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "$NVMF_PORT", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.685 "hdgst": ${hdgst:-false}, 00:07:56.685 "ddgst": ${ddgst:-false} 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 } 00:07:56.685 EOF 00:07:56.685 )") 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4067510 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:56.685 { 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme$subsystem", 00:07:56.685 "trtype": "$TEST_TRANSPORT", 00:07:56.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "$NVMF_PORT", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.685 "hdgst": ${hdgst:-false}, 
00:07:56.685 "ddgst": ${ddgst:-false} 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 } 00:07:56.685 EOF 00:07:56.685 )") 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:56.685 { 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme$subsystem", 00:07:56.685 "trtype": "$TEST_TRANSPORT", 00:07:56.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "$NVMF_PORT", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.685 "hdgst": ${hdgst:-false}, 00:07:56.685 "ddgst": ${ddgst:-false} 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 } 00:07:56.685 EOF 00:07:56.685 )") 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4067503 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme1", 00:07:56.685 "trtype": "tcp", 00:07:56.685 "traddr": "10.0.0.2", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "4420", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:56.685 "hdgst": false, 00:07:56.685 "ddgst": false 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 }' 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:56.685 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme1", 00:07:56.685 "trtype": "tcp", 00:07:56.685 "traddr": "10.0.0.2", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "4420", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:56.685 "hdgst": false, 00:07:56.685 "ddgst": false 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 }' 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme1", 00:07:56.685 "trtype": "tcp", 00:07:56.685 "traddr": "10.0.0.2", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "4420", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:56.685 "hdgst": false, 00:07:56.685 "ddgst": false 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 }' 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:56.685 "params": { 00:07:56.685 "name": "Nvme1", 00:07:56.685 "trtype": "tcp", 00:07:56.685 "traddr": "10.0.0.2", 00:07:56.685 "adrfam": "ipv4", 00:07:56.685 "trsvcid": "4420", 00:07:56.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:56.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:56.685 "hdgst": false, 00:07:56.685 "ddgst": false 00:07:56.685 }, 00:07:56.685 "method": "bdev_nvme_attach_controller" 00:07:56.685 }' [2024-12-09 11:43:04.546427] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:07:56.685 [2024-12-09 11:43:04.546428] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:07:56.685 [2024-12-09 11:43:04.546480] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:56.685 [2024-12-09 11:43:04.546481] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:56.685 [2024-12-09 11:43:04.547275] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:07:56.685 [2024-12-09 11:43:04.547321] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:56.685 [2024-12-09 11:43:04.548139] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
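What the trace above amounts to: gen_nvmf_target_json rendered one single-controller config per bdevperf instance, each handed over on /dev/fd/63 (the read side of a process substitution), and four independent DPDK processes came up in parallel. Distinct shm ids (-i 1 through 4, which is where the spdk1..spdk4 --file-prefix values come from) plus disjoint core masks keep their hugepage and trace state from colliding. A sketch of the launch/wait pattern, with BDEVPERF standing in for the full build/examples/bdevperf path used here:

  BDEVPERF=./build/examples/bdevperf
  # one workload per process, each on its own core and shm id
  $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID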
00:07:56.685 [2024-12-09 11:43:04.548187] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:56.947 [2024-12-09 11:43:04.705115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.947 [2024-12-09 11:43:04.733514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:56.947 [2024-12-09 11:43:04.763964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.947 [2024-12-09 11:43:04.792621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:56.947 [2024-12-09 11:43:04.826721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.208 [2024-12-09 11:43:04.856372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:57.208 [2024-12-09 11:43:04.873261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.208 [2024-12-09 11:43:04.901174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:57.208 Running I/O for 1 seconds... 00:07:57.208 Running I/O for 1 seconds... 00:07:57.208 Running I/O for 1 seconds... 00:07:57.469 Running I/O for 1 seconds... 00:07:58.419 10648.00 IOPS, 41.59 MiB/s 00:07:58.419 Latency(us) 00:07:58.419 [2024-12-09T10:43:06.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.419 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:58.419 Nvme1n1 : 1.01 10666.21 41.66 0.00 0.00 11933.69 5133.65 15728.64 00:07:58.419 [2024-12-09T10:43:06.305Z] =================================================================================================================== 00:07:58.419 [2024-12-09T10:43:06.305Z] Total : 10666.21 41.66 0.00 0.00 11933.69 5133.65 15728.64 00:07:58.419 10053.00 IOPS, 39.27 MiB/s [2024-12-09T10:43:06.305Z] 14853.00 IOPS, 58.02 MiB/s 00:07:58.419 Latency(us) 00:07:58.419 [2024-12-09T10:43:06.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.419 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:58.419 Nvme1n1 : 1.00 10143.58 39.62 0.00 0.00 12593.10 2594.13 28835.84 00:07:58.419 [2024-12-09T10:43:06.305Z] =================================================================================================================== 00:07:58.419 [2024-12-09T10:43:06.305Z] Total : 10143.58 39.62 0.00 0.00 12593.10 2594.13 28835.84 00:07:58.419 00:07:58.419 Latency(us) 00:07:58.419 [2024-12-09T10:43:06.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.419 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:58.419 Nvme1n1 : 1.01 14918.85 58.28 0.00 0.00 8555.95 4232.53 18786.99 00:07:58.419 [2024-12-09T10:43:06.305Z] =================================================================================================================== 00:07:58.419 [2024-12-09T10:43:06.306Z] Total : 14918.85 58.28 0.00 0.00 8555.95 4232.53 18786.99 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4067505 00:07:58.420 177240.00 IOPS, 692.34 MiB/s 00:07:58.420 Latency(us) 00:07:58.420 [2024-12-09T10:43:06.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.420 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:58.420 Nvme1n1 : 1.00 176891.92 690.98 0.00 0.00 719.51 
298.67 1966.08 00:07:58.420 [2024-12-09T10:43:06.306Z] =================================================================================================================== 00:07:58.420 [2024-12-09T10:43:06.306Z] Total : 176891.92 690.98 0.00 0.00 719.51 298.67 1966.08 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4067507 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4067510 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # sync 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # set +e 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # for i in {1..20} 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:07:58.420 rmmod nvme_tcp 00:07:58.420 rmmod nvme_fabrics 00:07:58.420 rmmod nvme_keyring 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # set -e 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@130 -- # return 0 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 4067406 ']' 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 4067406 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 4067406 ']' 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 4067406 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.420 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4067406 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4067406' 00:07:58.681 
killing process with pid 4067406 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 4067406 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 4067406 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # iptr 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # remove_spdk_ns 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.681 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:08:01.234 00:08:01.234 real 0m12.817s 00:08:01.234 user 0m18.560s 00:08:01.234 sys 0m7.051s 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.234 ************************************ 00:08:01.234 END TEST nvmf_bdev_io_wait 00:08:01.234 ************************************ 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.234 ************************************ 00:08:01.234 START TEST nvmf_queue_depth 00:08:01.234 ************************************ 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.234 * Looking for test storage... 
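The nvmf_queue_depth run starting here rebuilds the same rig, so it is worth pinning down what the nvmftestfini teardown above actually did: the bdevperf PIDs are reaped, the subsystem is deleted, the host-side NVMe modules are unloaded, the target process is killed, and only the firewall rules carrying the SPDK_NVMF comment tag are stripped before the namespace goes away. Condensed into a sketch; the namespace removal happens inside _remove_spdk_ns, which runs with xtrace silenced, so that step is inferred rather than read off the trace:

  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
  kill 4067406 && wait 4067406   # nvmfpid from this run
  # drop only the rules this test tagged, leaving the rest of the ruleset intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed: the visible effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1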
00:08:01.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.234 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.235 --rc genhtml_branch_coverage=1 00:08:01.235 --rc genhtml_function_coverage=1 00:08:01.235 --rc genhtml_legend=1 00:08:01.235 --rc geninfo_all_blocks=1 00:08:01.235 --rc geninfo_unexecuted_blocks=1 00:08:01.235 00:08:01.235 ' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.235 --rc genhtml_branch_coverage=1 00:08:01.235 --rc genhtml_function_coverage=1 00:08:01.235 --rc genhtml_legend=1 00:08:01.235 --rc geninfo_all_blocks=1 00:08:01.235 --rc geninfo_unexecuted_blocks=1 00:08:01.235 00:08:01.235 ' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.235 --rc genhtml_branch_coverage=1 00:08:01.235 --rc genhtml_function_coverage=1 00:08:01.235 --rc genhtml_legend=1 00:08:01.235 --rc geninfo_all_blocks=1 00:08:01.235 --rc geninfo_unexecuted_blocks=1 00:08:01.235 00:08:01.235 ' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.235 --rc genhtml_branch_coverage=1 00:08:01.235 --rc genhtml_function_coverage=1 00:08:01.235 --rc genhtml_legend=1 00:08:01.235 --rc geninfo_all_blocks=1 00:08:01.235 --rc geninfo_unexecuted_blocks=1 00:08:01.235 00:08:01.235 ' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # : 0 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:08:01.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@56 -- # have_pci_nics=0 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.235 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # xtrace_disable 00:08:01.236 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_devs=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_devs 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_net_devs=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # pci_drivers=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # local -A pci_drivers 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # net_devs=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga net_devs 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # e810=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga e810 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # x722=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga x722 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # mlx=() 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # local -ga mlx 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:09.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:09.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:09.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:09.387 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.387 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.388 11:43:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:08:09.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:08:09.388 00:08:09.388 --- 10.0.0.2 ping statistics --- 00:08:09.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.388 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:09.388 00:08:09.388 --- 10.0.0.1 ping statistics --- 00:08:09.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.388 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=4072204 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 4072204 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4072204 ']' 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.388 11:43:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 [2024-12-09 11:43:16.365898] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
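The trace above has just built the test topology: one E810 port (cvl_0_0) is moved into a private network namespace so target and initiator can talk over real NICs on a single host, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside that namespace. A minimal standalone sketch of the same setup, assuming the interface names cvl_0_0/cvl_0_1 and a stock SPDK tree; the RPC polling loop is a simplified stand-in for the harness's waitforlisten helper, not the helper itself:

  #!/usr/bin/env bash
  # Recreate the two-port loopback topology from the trace (assumed names).
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"               # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # same sanity check as in the log
  # Start the target inside the namespace; its UNIX RPC socket lives on the
  # shared filesystem, so rpc.py can reach it from the root namespace:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
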
00:08:09.388 [2024-12-09 11:43:16.365964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.388 [2024-12-09 11:43:16.468305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.388 [2024-12-09 11:43:16.518438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.388 [2024-12-09 11:43:16.518490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.388 [2024-12-09 11:43:16.518503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.388 [2024-12-09 11:43:16.518511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.388 [2024-12-09 11:43:16.518518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.388 [2024-12-09 11:43:16.519262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 [2024-12-09 11:43:17.234375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 Malloc0 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.650 11:43:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.650 [2024-12-09 11:43:17.295552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4072424 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4072424 /var/tmp/bdevperf.sock 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 4072424 ']' 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.650 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.650 [2024-12-09 11:43:17.353252] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
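Condensed, the whole queue-depth fixture traced above is five RPCs against the target plus one bdevperf process on the initiator side. A sketch of the same sequence, assuming rpc.py's default /var/tmp/spdk.sock for the target and the exact flags the harness used:

  # Target-side provisioning (the rpc_cmd calls from queue_depth.sh@23..27):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator-side load generator: 1024 outstanding 4 KiB verify I/Os for 10 s.
  # -z starts bdevperf paused so the NVMe controller can be attached over RPC first:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
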
00:08:09.650 [2024-12-09 11:43:17.353315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4072424 ] 00:08:09.650 [2024-12-09 11:43:17.444398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.650 [2024-12-09 11:43:17.496987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:10.595 NVMe0n1 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.595 11:43:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.595 Running I/O for 10 seconds... 00:08:12.920 8332.00 IOPS, 32.55 MiB/s [2024-12-09T10:43:21.377Z] 10212.00 IOPS, 39.89 MiB/s [2024-12-09T10:43:22.759Z] 10591.67 IOPS, 41.37 MiB/s [2024-12-09T10:43:23.707Z] 10902.00 IOPS, 42.59 MiB/s [2024-12-09T10:43:24.649Z] 11266.80 IOPS, 44.01 MiB/s [2024-12-09T10:43:25.592Z] 11607.67 IOPS, 45.34 MiB/s [2024-12-09T10:43:26.537Z] 11872.00 IOPS, 46.38 MiB/s [2024-12-09T10:43:27.480Z] 12154.75 IOPS, 47.48 MiB/s [2024-12-09T10:43:28.425Z] 12289.44 IOPS, 48.01 MiB/s [2024-12-09T10:43:28.686Z] 12428.10 IOPS, 48.55 MiB/s 00:08:20.800 Latency(us) 00:08:20.800 [2024-12-09T10:43:28.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:20.800 Verification LBA range: start 0x0 length 0x4000 00:08:20.800 NVMe0n1 : 10.05 12463.31 48.68 0.00 0.00 81849.63 10922.67 72526.51 00:08:20.800 [2024-12-09T10:43:28.686Z] =================================================================================================================== 00:08:20.800 [2024-12-09T10:43:28.686Z] Total : 12463.31 48.68 0.00 0.00 81849.63 10922.67 72526.51 00:08:20.800 { 00:08:20.801 "results": [ 00:08:20.801 { 00:08:20.801 "job": "NVMe0n1", 00:08:20.801 "core_mask": "0x1", 00:08:20.801 "workload": "verify", 00:08:20.801 "status": "finished", 00:08:20.801 "verify_range": { 00:08:20.801 "start": 0, 00:08:20.801 "length": 16384 00:08:20.801 }, 00:08:20.801 "queue_depth": 1024, 00:08:20.801 "io_size": 4096, 00:08:20.801 "runtime": 10.046125, 00:08:20.801 "iops": 12463.312968930806, 00:08:20.801 "mibps": 48.68481628488596, 00:08:20.801 "io_failed": 0, 00:08:20.801 "io_timeout": 0, 00:08:20.801 "avg_latency_us": 81849.62701595212, 00:08:20.801 "min_latency_us": 10922.666666666666, 00:08:20.801 "max_latency_us": 72526.50666666667 00:08:20.801 } 00:08:20.801 ], 00:08:20.801 "core_count": 1 00:08:20.801 } 00:08:20.801 11:43:28 
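The JSON summary just printed is internally consistent with the requested depth: by Little's Law, sustained concurrency equals IOPS times mean latency, and 12463.31 IOPS x 81849.63 us of average latency works out to roughly 1020 I/Os in flight, closely matching the -q 1024 the harness asked for (the small shortfall reflects the ramp at the start of the 10 s run). A quick check using only the logged figures:

  # Little's Law sanity check on the bdevperf results above:
  iops=12463.31; avg_lat_us=81849.63
  echo "$iops * $avg_lat_us / 1000000" | bc -l   # ~1020 in-flight I/Os vs. -q 1024
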
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4072424 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4072424 ']' 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 4072424 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072424 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072424' 00:08:20.801 killing process with pid 4072424 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4072424 00:08:20.801 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.801 00:08:20.801 Latency(us) 00:08:20.801 [2024-12-09T10:43:28.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.801 [2024-12-09T10:43:28.687Z] =================================================================================================================== 00:08:20.801 [2024-12-09T10:43:28.687Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4072424 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # sync 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # set +e 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # for i in {1..20} 00:08:20.801 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:08:20.801 rmmod nvme_tcp 00:08:20.801 rmmod nvme_fabrics 00:08:20.801 rmmod nvme_keyring 00:08:21.061 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:08:21.061 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # set -e 00:08:21.061 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@130 -- # return 0 00:08:21.061 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 4072204 ']' 00:08:21.061 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 4072204 ']' 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072204' 00:08:21.062 killing process with pid 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 4072204 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # iptr 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # remove_spdk_ns 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.062 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.610 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:08:23.610 00:08:23.610 real 0m22.343s 00:08:23.610 user 0m25.620s 00:08:23.610 sys 0m7.008s 00:08:23.610 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.610 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.610 ************************************ 00:08:23.610 END TEST nvmf_queue_depth 00:08:23.610 ************************************ 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.610 ************************************ 00:08:23.610 START TEST nvmf_target_multipath 00:08:23.610 ************************************ 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.610 * Looking for test storage... 00:08:23.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:23.610 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.611 --rc genhtml_branch_coverage=1 00:08:23.611 --rc genhtml_function_coverage=1 00:08:23.611 --rc genhtml_legend=1 00:08:23.611 --rc geninfo_all_blocks=1 00:08:23.611 --rc geninfo_unexecuted_blocks=1 00:08:23.611 00:08:23.611 ' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.611 --rc genhtml_branch_coverage=1 00:08:23.611 --rc genhtml_function_coverage=1 00:08:23.611 --rc genhtml_legend=1 00:08:23.611 --rc geninfo_all_blocks=1 00:08:23.611 --rc geninfo_unexecuted_blocks=1 00:08:23.611 00:08:23.611 ' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.611 --rc genhtml_branch_coverage=1 00:08:23.611 --rc genhtml_function_coverage=1 00:08:23.611 --rc genhtml_legend=1 00:08:23.611 --rc geninfo_all_blocks=1 00:08:23.611 --rc geninfo_unexecuted_blocks=1 00:08:23.611 00:08:23.611 ' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.611 --rc genhtml_branch_coverage=1 00:08:23.611 --rc genhtml_function_coverage=1 00:08:23.611 --rc genhtml_legend=1 00:08:23.611 --rc geninfo_all_blocks=1 00:08:23.611 --rc geninfo_unexecuted_blocks=1 00:08:23.611 00:08:23.611 ' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # : 0 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:08:23.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@56 -- # have_pci_nics=0 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # xtrace_disable 00:08:23.611 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.755 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_devs=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_devs 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_net_devs=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # pci_drivers=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # local -A pci_drivers 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # 
net_devs=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga net_devs 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # e810=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga e810 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # x722=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga x722 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # mlx=() 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # local -ga mlx 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:31.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:31.756 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:31.756 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.756 11:43:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:31.756 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@282 -- # ip 
link set cvl_0_1 up
00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:31.756 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:08:31.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:31.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms
00:08:31.757
00:08:31.757 --- 10.0.0.2 ping statistics ---
00:08:31.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:31.757 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:31.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:31.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms
00:08:31.757
00:08:31.757 --- 10.0.0.1 ping statistics ---
00:08:31.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:31.757 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:08:31.757 only one NIC for nvmf test
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # sync
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # set +e
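Stripped of the xtrace prefixes, the nvmftestinit bring-up traced above reduces to the sequence below. The harness moves the first E810 port (cvl_0_0, the target side) into its own network namespace so initiator and target can exercise a real NIC-to-NIC TCP path on one host; interface names and addresses are exactly the ones logged (a reconstruction from the trace, not the literal common.sh source):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

The SPDK_NVMF comment is what lets the later iptr cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible in the fini trace below) remove only the rules the test inserted; in the real log the comment embeds the full rule text after 'SPDK_NVMF:', which the sketch shortens to the bare tag.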
00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # for i in {1..20} 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:08:31.757 rmmod nvme_tcp 00:08:31.757 rmmod nvme_fabrics 00:08:31.757 rmmod nvme_keyring 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # set -e 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@130 -- # return 0 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # iptr 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # remove_spdk_ns 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.757 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # sync 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # set +e 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # for i in {1..20} 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # set -e 00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@130 -- # return 0
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # iptr
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # remove_spdk_ns
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:08:33.144
00:08:33.144 real 0m9.826s
00:08:33.144 user 0m2.170s
00:08:33.144 sys 0m5.583s
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:33.144 ************************************
00:08:33.144 END TEST nvmf_target_multipath
00:08:33.144 ************************************
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:33.144 ************************************
00:08:33.144 START TEST nvmf_zcopy
00:08:33.144 ************************************
00:08:33.144 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:08:33.406 * Looking for test storage...
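Two things in the trace above are worth noting. First, nvmf_target_multipath reports END TEST after under ten seconds without running any multipath I/O: nvmf_tcp_init found only one usable interface pair, left NVMF_SECOND_TARGET_IP empty (the bare assignment at common.sh@263), and the guard at target/multipath.sh@45-48 bailed out early. Reconstructed from the trace, the guard is roughly:

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini
      exit 0
  fi

Second, the recurring warning '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected' is a cosmetic script bug, not a failure: an unset variable reaches an arithmetic test as the empty string ('[' '' -eq 1 ']'), which bash rejects. A defensive form such as [ "${SOME_FLAG:-0}" -eq 1 ] (SOME_FLAG standing in for whatever variable line 34 actually tests) would silence it; either way execution continues past it.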
00:08:33.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.406 --rc genhtml_branch_coverage=1 00:08:33.406 --rc genhtml_function_coverage=1 00:08:33.406 --rc genhtml_legend=1 00:08:33.406 --rc geninfo_all_blocks=1 00:08:33.406 --rc geninfo_unexecuted_blocks=1 00:08:33.406 00:08:33.406 ' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.406 --rc genhtml_branch_coverage=1 00:08:33.406 --rc genhtml_function_coverage=1 00:08:33.406 --rc genhtml_legend=1 00:08:33.406 --rc geninfo_all_blocks=1 00:08:33.406 --rc geninfo_unexecuted_blocks=1 00:08:33.406 00:08:33.406 ' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.406 --rc genhtml_branch_coverage=1 00:08:33.406 --rc genhtml_function_coverage=1 00:08:33.406 --rc genhtml_legend=1 00:08:33.406 --rc geninfo_all_blocks=1 00:08:33.406 --rc geninfo_unexecuted_blocks=1 00:08:33.406 00:08:33.406 ' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.406 --rc genhtml_branch_coverage=1 00:08:33.406 --rc genhtml_function_coverage=1 00:08:33.406 --rc genhtml_legend=1 00:08:33.406 --rc geninfo_all_blocks=1 00:08:33.406 --rc geninfo_unexecuted_blocks=1 00:08:33.406 00:08:33.406 ' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.406 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # : 0 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:08:33.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@56 -- # have_pci_nics=0 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # xtrace_disable 00:08:33.407 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_devs=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_devs 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_net_devs=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # pci_drivers=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # local -A pci_drivers 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # net_devs=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga net_devs 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # e810=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga e810 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # x722=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga x722 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # mlx=() 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # local -ga mlx 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:41.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:41.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:41.600 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:41.600 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.600 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:08:41.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:08:41.601 00:08:41.601 --- 10.0.0.2 ping statistics --- 00:08:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.601 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:08:41.601 00:08:41.601 --- 10.0.0.1 ping statistics --- 00:08:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.601 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=4083101 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 4083101 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 4083101 ']' 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.601 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.601 [2024-12-09 11:43:48.728629] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
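At this point nvmfappstart has launched the target inside the test namespace and is blocking until its RPC socket answers. A minimal sketch of what the helper does (the real waitforlisten in autotest_common.sh adds a timeout and pid liveness checks):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll until the app's JSON-RPC server is reachable on /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The flags match the trace: -m 0x2 pins the app to core 1 (hence the 'Reactor started on core 1' notice below), -e 0xFFFF enables every tracepoint group, and -i 0 selects shared-memory id 0, which is also why the startup notice suggests 'spdk_trace -s nvmf -i 0' for snapshots.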
00:08:41.601 [2024-12-09 11:43:48.728708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.601 [2024-12-09 11:43:48.827712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.601 [2024-12-09 11:43:48.878275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.601 [2024-12-09 11:43:48.878332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.601 [2024-12-09 11:43:48.878341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.601 [2024-12-09 11:43:48.878348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.601 [2024-12-09 11:43:48.878354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.601 [2024-12-09 11:43:48.879126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 [2024-12-09 11:43:49.592573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 [2024-12-09 11:43:49.608858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 malloc0 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.882 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:41.883 { 00:08:41.883 "params": { 00:08:41.883 "name": "Nvme$subsystem", 00:08:41.883 "trtype": "$TEST_TRANSPORT", 00:08:41.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.883 "adrfam": "ipv4", 00:08:41.883 "trsvcid": "$NVMF_PORT", 00:08:41.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.883 "hdgst": ${hdgst:-false}, 00:08:41.883 "ddgst": ${ddgst:-false} 00:08:41.883 }, 00:08:41.883 "method": "bdev_nvme_attach_controller" 00:08:41.883 } 00:08:41.883 EOF 00:08:41.883 )") 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
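With the target listening, zcopy.sh configures it over RPC. Deduplicated from the rpc_cmd traces above (rpc_cmd is the harness wrapper around scripts/rpc.py against the target's default /var/tmp/spdk.sock), the sequence is:

  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport with zero-copy requested, the point of this test
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB RAM-backed bdev with 4096-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Here -a allows any host NQN to connect and -m 10 caps the subsystem at ten namespaces. The gen_nvmf_target_json output traced next is handed to bdevperf on /dev/fd/62 and describes a single bdev_nvme_attach_controller call pointing Nvme1 at 10.0.0.2:4420 with header and data digests disabled; bdevperf's own flags (-t 10 -q 128 -w verify -o 8192) ask for ten seconds of verified I/O at queue depth 128 with 8 KiB operations.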
00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:08:41.883 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:08:41.883 "params": {
00:08:41.883 "name": "Nvme1",
00:08:41.883 "trtype": "tcp",
00:08:41.883 "traddr": "10.0.0.2",
00:08:41.883 "adrfam": "ipv4",
00:08:41.883 "trsvcid": "4420",
00:08:41.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:41.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:41.883 "hdgst": false,
00:08:41.883 "ddgst": false
00:08:41.883 },
00:08:41.883 "method": "bdev_nvme_attach_controller"
00:08:41.883 }'
00:08:41.883 [2024-12-09 11:43:49.697271] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:08:41.883 [2024-12-09 11:43:49.697333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083278 ]
00:08:42.170 [2024-12-09 11:43:49.788415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.170 [2024-12-09 11:43:49.840975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.170 Running I/O for 10 seconds...
00:08:44.556 6488.00 IOPS, 50.69 MiB/s
[2024-12-09T10:43:53.384Z] 6552.50 IOPS, 51.19 MiB/s
[2024-12-09T10:43:54.327Z] 6563.33 IOPS, 51.28 MiB/s
[2024-12-09T10:43:55.271Z] 6581.50 IOPS, 51.42 MiB/s
[2024-12-09T10:43:56.213Z] 6587.00 IOPS, 51.46 MiB/s
[2024-12-09T10:43:57.154Z] 6591.83 IOPS, 51.50 MiB/s
[2024-12-09T10:43:58.096Z] 6802.71 IOPS, 53.15 MiB/s
[2024-12-09T10:43:59.039Z] 7176.12 IOPS, 56.06 MiB/s
[2024-12-09T10:44:00.422Z] 7468.44 IOPS, 58.35 MiB/s
[2024-12-09T10:44:00.422Z] 7704.40 IOPS, 60.19 MiB/s
00:08:52.536 Latency(us)
00:08:52.536 [2024-12-09T10:44:00.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:52.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:52.536 Verification LBA range: start 0x0 length 0x1000
00:08:52.536 Nvme1n1 : 10.01 7706.70 60.21 0.00 0.00 16560.89 873.81 28398.93
00:08:52.536 [2024-12-09T10:44:00.422Z] ===================================================================================================================
00:08:52.536 [2024-12-09T10:44:00.422Z] Total : 7706.70 60.21 0.00 0.00 16560.89 873.81 28398.93
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4085303
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:08:52.536 {
00:08:52.536 "params": {
00:08:52.536 "name":
"Nvme$subsystem", 00:08:52.536 "trtype": "$TEST_TRANSPORT", 00:08:52.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.536 "adrfam": "ipv4", 00:08:52.536 "trsvcid": "$NVMF_PORT", 00:08:52.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.536 "hdgst": ${hdgst:-false}, 00:08:52.536 "ddgst": ${ddgst:-false} 00:08:52.536 }, 00:08:52.536 "method": "bdev_nvme_attach_controller" 00:08:52.536 } 00:08:52.536 EOF 00:08:52.536 )") 00:08:52.536 [2024-12-09 11:44:00.150413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.536 [2024-12-09 11:44:00.150442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:08:52.536 [2024-12-09 11:44:00.158397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.536 [2024-12-09 11:44:00.158405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:52.536 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:52.536 "params": { 00:08:52.536 "name": "Nvme1", 00:08:52.536 "trtype": "tcp", 00:08:52.536 "traddr": "10.0.0.2", 00:08:52.536 "adrfam": "ipv4", 00:08:52.536 "trsvcid": "4420", 00:08:52.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.536 "hdgst": false, 00:08:52.536 "ddgst": false 00:08:52.536 }, 00:08:52.536 "method": "bdev_nvme_attach_controller" 00:08:52.536 }' 00:08:52.536 [2024-12-09 11:44:00.166417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.536 [2024-12-09 11:44:00.166425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.536 [2024-12-09 11:44:00.174436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.536 [2024-12-09 11:44:00.174443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.536 [2024-12-09 11:44:00.182456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.536 [2024-12-09 11:44:00.182462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.536 [2024-12-09 11:44:00.194203] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:08:52.536 [2024-12-09 11:44:00.194249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085303 ]
00:08:52.536 [2024-12-09 11:44:00.194486 .. 11:44:00.274698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeating continuously; individual timestamps elided)
00:08:52.537 [2024-12-09 11:44:00.276441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.537 [2024-12-09 11:44:00.282712 .. 11:44:00.298761] same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeating
00:08:52.537 [2024-12-09 11:44:00.305851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.537 [2024-12-09 11:44:00.306773 .. 11:44:00.403031] same error pair repeating
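(Editor's aside: the bdevperf instance whose startup notices appear above, pid 4085303, is the second of the two runs in this section; they differ only in workload flags. A by-hand sketch of the two invocations, assuming the workspace layout from the log and a sourced test/nvmf/common.sh, and writing the generated config to a hypothetical temporary file instead of a /dev/fd process substitution:)

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    gen_nvmf_target_json > /tmp/nvmf.json   # helper from test/nvmf/common.sh (assumed sourced)
    # 10 s verify pass: queue depth 128 (-q), 8192-byte I/O (-o)
    ./build/examples/bdevperf --json /tmp/nvmf.json -t 10 -q 128 -w verify -o 8192
    # 5 s random read/write run, 50% reads (-M 50), same depth and I/O size
    ./build/examples/bdevperf --json /tmp/nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192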
00:08:52.537 [2024-12-09 11:44:00.411042 .. 11:44:00.563454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeating continuously; elapsed markers advance to 00:08:52.799)
00:08:52.799 Running I/O for 5 seconds...
00:08:52.799 [2024-12-09 11:44:00.571461 .. 11:44:00.641111] same error pair repeating
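(Editor's aside: the flood of paired errors around this run is expected, not a failure. While bdevperf drives the 5-second randrw job, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which the namespace added at zcopy.sh@30 already owns; each attempt pauses the subsystem, fails in nvmf_rpc_ns_paused, and resumes it, so subsystem pause/resume is exercised under live zero-copy I/O. A minimal sketch of such a retry loop, assuming the harness's rpc_cmd wrapper and the $perfpid recorded at zcopy.sh@39; the while/kill scaffolding is an assumption, not lifted from zcopy.sh:)

    # Hypothetical reconstruction of the loop behind the repeated errors.
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is already taken, so each call pauses the subsystem, fails with
        # "Requested NSID 1 already in use", and resumes it while I/O is in flight.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done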
00:08:52.799 [2024-12-09 11:44:00.650131 .. 11:44:01.569435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeating continuously for the duration of the run; elapsed markers advance from 00:08:52.799 through 00:08:53.843; individual timestamps elided)
00:08:53.843 19063.00 IOPS, 148.93 MiB/s
[2024-12-09T10:44:01.729Z] [2024-12-09 11:44:01.577336 .. 11:44:02.421044] same error pair repeating (elapsed markers advance through 00:08:54.626)
00:08:54.626 [2024-12-09 11:44:02.429688] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.429704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.438657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.438673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.447291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.447306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.456418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.456433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.465476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.465491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.473792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.473806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.482622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.482642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.491544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.491559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.500248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.500267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.626 [2024-12-09 11:44:02.509311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.626 [2024-12-09 11:44:02.509326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.886 [2024-12-09 11:44:02.518252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.886 [2024-12-09 11:44:02.518267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.886 [2024-12-09 11:44:02.527394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.886 [2024-12-09 11:44:02.527410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.536002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.536017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.544626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.544646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.553844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.553858] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.561893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.561908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.570511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.570525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 19113.00 IOPS, 149.32 MiB/s [2024-12-09T10:44:02.773Z] [2024-12-09 11:44:02.579229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.579244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.587860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.587874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.596711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.596726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.605277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.605292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.614512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.614528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.622696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.622711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.631502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.631517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.640396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.640411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.649004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.649019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.657902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.657917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.666709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.666728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.675418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.675433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 
11:44:02.684298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.684312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.693292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.693307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.701897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.701912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.710541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.710556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.719835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.719850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.728453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.728468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.737446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.737461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.746217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.746232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.754771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.754786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.887 [2024-12-09 11:44:02.763912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.887 [2024-12-09 11:44:02.763927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.772325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.772340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.780720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.780735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.789917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.789932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.798331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.798345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.807169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.807183] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.816327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.816342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.824829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.824844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.833676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.833691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.842333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.842348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.851073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.851087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.860222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.860237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.869157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.869171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.877693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.877707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.886391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.886406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.894889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.894903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.903664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.903678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.912634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.912652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.921253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.921267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.930314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.930328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.938763] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.938777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.947387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.947402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.956485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.956500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.965170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.148 [2024-12-09 11:44:02.965184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.148 [2024-12-09 11:44:02.973977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:02.973992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:02.982579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:02.982593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:02.991660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:02.991674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:03.000655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:03.000670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:03.009579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:03.009594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:03.018306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:03.018320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.149 [2024-12-09 11:44:03.027201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.149 [2024-12-09 11:44:03.027215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.036098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.036113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.045075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.045089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.054184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.054199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.062791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.062806] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.071917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.071932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.080645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.080660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.089465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.089479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.097886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.097901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.106810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.106825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.115839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.115853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.124724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.124738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.133253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.133267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.141753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.141767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.150281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.150295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.159252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.159266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.167924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.167938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.176936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.176950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.185945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.185959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.194992] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.195006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.204101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.204115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.212802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.212816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.221581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.221596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.230371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.230386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.239113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.239127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.248205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.248219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.256780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.256794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.265184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.265198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.274101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.274116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.282831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.282845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.410 [2024-12-09 11:44:03.291858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.410 [2024-12-09 11:44:03.291872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.300481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.300496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.309627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.309646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.318772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.318786] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.327908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.327926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.336311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.336325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.345253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.671 [2024-12-09 11:44:03.345268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.671 [2024-12-09 11:44:03.353912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.353927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.362346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.362360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.371567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.371581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.380466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.380481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.389608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.389622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.398157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.398172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.407227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.407241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.415504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.415519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.424482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.424497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.433503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.433518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.441463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.441478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.450406] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.450421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.459356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.459371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.468396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.468410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.477546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.477561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.486399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.486413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.495144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.495164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.503924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.503938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.512339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.512354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.521076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.521090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.529272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.529287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.538203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.538217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.546500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.546514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.672 [2024-12-09 11:44:03.555879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.672 [2024-12-09 11:44:03.555894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.564938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.564952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.573488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.573502] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 19168.33 IOPS, 149.75 MiB/s [2024-12-09T10:44:03.819Z] [2024-12-09 11:44:03.582003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.582018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.591088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.591102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.599621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.599636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.608665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.608680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.617912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.617926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.626340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.626355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.634874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.634888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.643752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.643766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.652774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.652789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.661163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.661181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.669660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.669674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.678581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.678596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.687395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.687410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.695444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.695459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 
11:44:03.704054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.704068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.712962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.712976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.721840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.721855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.730921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.730936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.738857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.738871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.747984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.747998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.756366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.756381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.765008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.765022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.773716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.773731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.782755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.782769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.791940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.791954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.800992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.801006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.933 [2024-12-09 11:44:03.809773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.933 [2024-12-09 11:44:03.809788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.818461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.818476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.826879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.826893] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.835300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.835315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.844294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.844309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.852972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.852986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.862006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.862020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.870509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.870523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.879042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.879056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.888386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.888402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.896354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.896369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.905367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.905383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.913839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.913854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.922745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.922760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.931152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.931168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.940084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.940099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.948711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.948725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.957592] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.957606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.966558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.966573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.975329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.975344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.984405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.984420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:03.993409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:03.993423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.001899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.001914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.011044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.011059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.019254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.019268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.028101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.028116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.037153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.037168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.046274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.046288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.194 [2024-12-09 11:44:04.054652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.194 [2024-12-09 11:44:04.054666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.195 [2024-12-09 11:44:04.062807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.195 [2024-12-09 11:44:04.062821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.195 [2024-12-09 11:44:04.071690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.195 [2024-12-09 11:44:04.071705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.455 [2024-12-09 11:44:04.080436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.455 [2024-12-09 11:44:04.080452] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:56.455 [2024-12-09 11:44:04.088907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:56.455 [2024-12-09 11:44:04.088921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[this error pair repeats unchanged roughly every 9 ms, from 2024-12-09 11:44:04.088 through 11:44:05.484 (~160 pairs); the elapsed-time stamps advance from 00:08:56.455 to 00:08:57.760 -- condensed here]
00:08:56.717 19193.50 IOPS, 149.95 MiB/s [2024-12-09T10:44:04.603Z]
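The error storm above appears to come from repeated nvmf_subsystem_add_ns RPCs issued while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, with the target rejecting every attempt. A minimal bash sketch of such a loop (not the literal zcopy.sh source; the rpc.py path and the malloc0 bdev name are taken from elsewhere in this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Each call must fail: NSID 1 is already claimed by the subsystem, so the
    # target answers "Requested NSID 1 already in use" and rpc.py exits non-zero.
    for _ in $(seq 1 160); do        # ~160 failed attempts were observed in this run
        if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
            echo "unexpected success: NSID 1 added twice" >&2
            exit 1
        fi
    done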
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[the same pair keeps repeating through 2024-12-09 11:44:05.570 -- condensed here]
00:08:57.760 19204.60 IOPS, 150.04 MiB/s [2024-12-09T10:44:05.646Z]
00:08:57.760 [two final pairs at 11:44:05.579 and 11:44:05.585 -- condensed here]
00:08:57.760 Latency(us)
00:08:57.760 Device Information                                                             : runtime(s)     IOPS    MiB/s  Fail/s    TO/s  Average      min      max
00:08:57.760 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:57.760 Nvme1n1                                                                        :       5.01 19207.43   150.06    0.00    0.00  6658.47  2416.64 18786.99
00:08:57.760 ===================================================================================================================
00:08:57.760 Total                                                                          :            19207.43   150.06    0.00    0.00  6658.47  2416.64 18786.99
[after the summary prints, the error pair repeats 12 more times, 2024-12-09 11:44:05.593 through 11:44:05.681, while the job drains; elapsed stamps advance from 00:08:57.760 to 00:08:58.021 -- condensed here]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4085303) - No such process
00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4085303
00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.021 11:44:05
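As a consistency check on the summary table: the MiB/s column is just the IOPS column times the 8192-byte I/O size from the job header,

    awk 'BEGIN { printf "%.2f MiB/s\n", 19207.43 * 8192 / 1048576 }'   # prints 150.06 MiB/s

which matches both the Nvme1n1 row and the Total row.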
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 delay0 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:58.021 [2024-12-09 11:44:05.802208] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:06.159 Initializing NVMe Controllers 00:09:06.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:06.159 Initialization complete. Launching workers. 
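The three target/zcopy.sh steps just traced are easier to read side by side. A condensed reconstruction (bash; flags copied verbatim from the trace, comments are interpretation): the delay bdev wraps malloc0 with one-second (1000000 us) average and tail latencies, so submitted commands linger in the queue long enough for the abort example to cancel them.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read, avg/p99 write latency (us)
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 s of 50/50 randrw at queue depth 64 against the slow namespace,
    # aborting outstanding commands as it goes:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'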
00:09:06.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 243, failed: 34535 00:09:06.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34667, failed to submit 111 00:09:06.159 success 34554, unsuccessful 113, failed 0 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # sync 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # set +e 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # for i in {1..20} 00:09:06.159 11:44:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:09:06.159 rmmod nvme_tcp 00:09:06.159 rmmod nvme_fabrics 00:09:06.159 rmmod nvme_keyring 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # set -e 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@130 -- # return 0 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 4083101 ']' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 4083101 ']' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4083101' 00:09:06.159 killing process with pid 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 4083101 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # iptr 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:06.159 11:44:13 
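A quick consistency check on the abort counters above: 34554 successful + 113 unsuccessful aborts account for exactly the 34667 submitted, and submitted (34667) plus failed-to-submit (111) equals the namespace totals (243 I/Os completed normally + 34535 failed/aborted = 34778). The killprocess trace that follows reduces to roughly this (a sketch; the real helper in test/common/autotest_common.sh also special-cases targets launched through sudo):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # a pid argument is required
        kill -0 "$pid" || return 1                # bail out if it is already gone
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap it; the exit code is ignored here
    }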
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # remove_spdk_ns 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.159 11:44:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:09:07.544 00:09:07.544 real 0m34.351s 00:09:07.544 user 0m45.463s 00:09:07.544 sys 0m12.266s 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.544 ************************************ 00:09:07.544 END TEST nvmf_zcopy 00:09:07.544 ************************************ 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.544 ************************************ 00:09:07.544 START TEST nvmf_nmic 00:09:07.544 ************************************ 00:09:07.544 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.807 * Looking for test storage... 
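The banner-and-timing pattern around END TEST nvmf_zcopy / START TEST nvmf_nmic above comes from the run_test wrapper; in outline (a sketch, not the verbatim autotest_common.sh source):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # here: .../test/nvmf/target/nmic.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }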
00:09:07.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.807 --rc genhtml_branch_coverage=1 00:09:07.807 --rc genhtml_function_coverage=1 00:09:07.807 --rc genhtml_legend=1 00:09:07.807 --rc geninfo_all_blocks=1 00:09:07.807 --rc geninfo_unexecuted_blocks=1 00:09:07.807 00:09:07.807 ' 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.807 --rc genhtml_branch_coverage=1 00:09:07.807 --rc genhtml_function_coverage=1 00:09:07.807 --rc genhtml_legend=1 00:09:07.807 --rc geninfo_all_blocks=1 00:09:07.807 --rc geninfo_unexecuted_blocks=1 00:09:07.807 00:09:07.807 ' 00:09:07.807 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.808 --rc genhtml_branch_coverage=1 00:09:07.808 --rc genhtml_function_coverage=1 00:09:07.808 --rc genhtml_legend=1 00:09:07.808 --rc geninfo_all_blocks=1 00:09:07.808 --rc geninfo_unexecuted_blocks=1 00:09:07.808 00:09:07.808 ' 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.808 --rc genhtml_branch_coverage=1 00:09:07.808 --rc genhtml_function_coverage=1 00:09:07.808 --rc genhtml_legend=1 00:09:07.808 --rc geninfo_all_blocks=1 00:09:07.808 --rc geninfo_unexecuted_blocks=1 00:09:07.808 00:09:07.808 ' 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
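The lcov gate traced a few lines back runs scripts/common.sh's lt/cmp_versions helpers; a compact sketch of the traced comparison logic (not the verbatim source):

    lt() { cmp_versions "$1" '<' "$2"; }          # lt 1.15 2  ->  "is 1.15 < 2?"
    cmp_versions() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"                    # 1.15 -> (1 15)
        read -ra ver2 <<< "$3"                    # 2    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} lt=0 gt=0
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then gt=1; break; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then lt=1; break; fi
        done
        case "$2" in '<') ((lt == 1)) ;; '>') ((gt == 1)) ;; esac
    }
    # lt 1.15 2 && echo yes   -> yes (decided at the first component, 1 < 2)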
00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[same toolchain bin directories as in the paths/export.sh@2 value above -- condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[remainder as above -- condensed]:/var/lib/snapd/snap/bin 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[remainder as above -- condensed]:/var/lib/snapd/snap/bin 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # : 0 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:09:07.808
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@56 -- # have_pci_nics=0 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.808
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.808
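One real wart is visible in this block: nvmf/common.sh line 34 evaluated '[' '' -eq 1 ']' because the flag it tests expanded to the empty string, producing "[: : integer expression expected". A parameter-expansion default keeps such a test well-formed; the flag's actual name is not visible in this trace, so SOME_FLAG below is a placeholder:

    # Hypothetical fix shape -- SOME_FLAG stands in for whichever variable
    # common.sh line 34 actually tests; an empty/unset value now compares as 0.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi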
11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # xtrace_disable 00:09:07.808 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.950 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.950 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_devs=() 00:09:15.950 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_devs 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_net_devs=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # pci_drivers=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # local -A pci_drivers 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # net_devs=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga net_devs 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # e810=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga e810 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # x722=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga x722 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # mlx=() 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # local -ga mlx 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:15.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:15.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:15.951 11:44:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:15.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:15.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.951 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:09:15.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:09:15.951 00:09:15.951 --- 10.0.0.2 ping statistics --- 00:09:15.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.951 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:09:15.951 00:09:15.951 --- 10.0.0.1 ping statistics --- 00:09:15.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.951 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:15.951 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=4092112 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 4092112 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 4092112 ']' 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.952 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.952 [2024-12-09 11:44:23.140837] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
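Condensed, the plumbing nvmftestinit just performed gives the target and the initiator separate network stacks on one host: one port of the E810 pair (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as 10.0.0.1, a firewall rule opens the NVMe/TCP port, and both directions are verified with a one-packet ping before nvmf_tgt is launched inside the namespace. The equivalent standalone sequence, using the interface and namespace names from the trace (paths relative to the SPDK repo):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns reaches the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns reaches the initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &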
00:09:15.952 [2024-12-09 11:44:23.140904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.952 [2024-12-09 11:44:23.239117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.952 [2024-12-09 11:44:23.293237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.952 [2024-12-09 11:44:23.293296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.952 [2024-12-09 11:44:23.293304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.952 [2024-12-09 11:44:23.293311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.952 [2024-12-09 11:44:23.293317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.952 [2024-12-09 11:44:23.295370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.952 [2024-12-09 11:44:23.295504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.952 [2024-12-09 11:44:23.295689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.952 [2024-12-09 11:44:23.295689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.212 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.212 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.212 [2024-12-09 11:44:24.005326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.212 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.212 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 Malloc0 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 [2024-12-09 11:44:24.057981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:16.213 test case1: single bdev can't be used in multiple subsystems 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 [2024-12-09 11:44:24.081858] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:16.213 [2024-12-09 11:44:24.081877] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:16.213 [2024-12-09 11:44:24.081885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.213 request: 00:09:16.213 { 00:09:16.213 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:16.213 "namespace": { 00:09:16.213 "bdev_name": "Malloc0", 00:09:16.213 "no_auto_visible": false, 
00:09:16.213 "hide_metadata": false 00:09:16.213 }, 00:09:16.213 "method": "nvmf_subsystem_add_ns", 00:09:16.213 "req_id": 1 00:09:16.213 } 00:09:16.213 Got JSON-RPC error response 00:09:16.213 response: 00:09:16.213 { 00:09:16.213 "code": -32602, 00:09:16.213 "message": "Invalid parameters" 00:09:16.213 } 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:16.213 Adding namespace failed - expected result. 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:16.213 test case2: host connect to nvmf target in multiple paths 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.213 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.213 [2024-12-09 11:44:24.093997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:16.474 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.474 11:44:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.859 11:44:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:19.771 11:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.771 11:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:19.771 11:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.771 11:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:19.771 11:44:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.680 11:44:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:21.680 11:44:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:21.680 [global] 00:09:21.680 thread=1 00:09:21.680 invalidate=1 00:09:21.680 rw=write 00:09:21.680 time_based=1 00:09:21.680 runtime=1 00:09:21.680 ioengine=libaio 00:09:21.680 direct=1 00:09:21.680 bs=4096 00:09:21.680 iodepth=1 00:09:21.680 norandommap=0 00:09:21.680 numjobs=1 00:09:21.680 00:09:21.680 verify_dump=1 00:09:21.680 verify_backlog=512 00:09:21.680 verify_state_save=0 00:09:21.680 do_verify=1 00:09:21.680 verify=crc32c-intel 00:09:21.680 [job0] 00:09:21.680 filename=/dev/nvme0n1 00:09:21.680 Could not set queue depth (nvme0n1) 00:09:21.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.939 fio-3.35 00:09:21.939 Starting 1 thread 00:09:22.880 00:09:22.880 job0: (groupid=0, jobs=1): err= 0: pid=4093547: Mon Dec 9 11:44:30 2024 00:09:22.880 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:09:22.880 slat (nsec): min=26231, max=27484, avg=26591.22, stdev=327.37 00:09:22.880 clat (usec): min=40895, max=42101, avg=41504.90, stdev=502.73 00:09:22.880 lat (usec): min=40922, max=42128, avg=41531.49, stdev=502.85 00:09:22.880 clat percentiles (usec): 00:09:22.880 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:22.880 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:22.880 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:22.880 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:22.880 | 99.99th=[42206] 00:09:22.880 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:22.880 slat (nsec): min=9202, max=66888, avg=29638.03, stdev=9979.53 00:09:22.880 clat (usec): min=126, max=868, avg=462.69, stdev=134.81 00:09:22.880 lat (usec): min=137, max=914, avg=492.33, stdev=137.18 00:09:22.880 clat percentiles (usec): 00:09:22.880 | 1.00th=[ 204], 5.00th=[ 235], 10.00th=[ 289], 20.00th=[ 314], 00:09:22.880 | 30.00th=[ 396], 40.00th=[ 424], 50.00th=[ 461], 60.00th=[ 519], 00:09:22.880 | 70.00th=[ 545], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 668], 00:09:22.880 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 865], 99.95th=[ 865], 00:09:22.880 | 99.99th=[ 865] 00:09:22.880 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:22.880 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:22.880 lat (usec) : 250=6.79%, 500=47.55%, 750=42.08%, 1000=0.19% 00:09:22.880 lat (msec) : 50=3.40% 00:09:22.880 cpu : usr=1.40%, sys=1.60%, ctx=530, majf=0, minf=1 00:09:22.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:22.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.880 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:22.880 00:09:22.880 Run status group 0 (all jobs): 00:09:22.880 READ: bw=71.8KiB/s (73.5kB/s), 71.8KiB/s-71.8KiB/s (73.5kB/s-73.5kB/s), io=72.0KiB (73.7kB), run=1003-1003msec 00:09:22.880 WRITE: bw=2042KiB/s (2091kB/s), 2042KiB/s-2042KiB/s (2091kB/s-2091kB/s), io=2048KiB (2097kB), run=1003-1003msec 00:09:22.880 00:09:22.880 Disk stats (read/write): 00:09:22.880 nvme0n1: ios=65/512, merge=0/0, ticks=1010/186, in_queue=1196, 
util=97.49% 00:09:22.881 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # sync 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # set +e 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # for i in {1..20} 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:09:23.141 rmmod nvme_tcp 00:09:23.141 rmmod nvme_fabrics 00:09:23.141 rmmod nvme_keyring 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # set -e 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@130 -- # return 0 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 4092112 ']' 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 4092112 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 4092112 ']' 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 4092112 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.141 11:44:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4092112 00:09:23.402 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.402 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4092112' 00:09:23.403 killing process with pid 4092112 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 4092112 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 4092112 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # iptr 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # remove_spdk_ns 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.403 11:44:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:09:25.949 00:09:25.949 real 0m17.852s 00:09:25.949 user 0m46.512s 00:09:25.949 sys 0m6.482s 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.949 ************************************ 00:09:25.949 END TEST nvmf_nmic 00:09:25.949 ************************************ 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.949 ************************************ 00:09:25.949 START TEST nvmf_fio_target 00:09:25.949 ************************************ 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.949 * Looking for test storage... 
00:09:25.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.949 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.950 --rc genhtml_branch_coverage=1 00:09:25.950 --rc genhtml_function_coverage=1 00:09:25.950 --rc genhtml_legend=1 00:09:25.950 --rc geninfo_all_blocks=1 00:09:25.950 --rc geninfo_unexecuted_blocks=1 00:09:25.950 00:09:25.950 ' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.950 --rc genhtml_branch_coverage=1 00:09:25.950 --rc genhtml_function_coverage=1 00:09:25.950 --rc genhtml_legend=1 00:09:25.950 --rc geninfo_all_blocks=1 00:09:25.950 --rc geninfo_unexecuted_blocks=1 00:09:25.950 00:09:25.950 ' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.950 --rc genhtml_branch_coverage=1 00:09:25.950 --rc genhtml_function_coverage=1 00:09:25.950 --rc genhtml_legend=1 00:09:25.950 --rc geninfo_all_blocks=1 00:09:25.950 --rc geninfo_unexecuted_blocks=1 00:09:25.950 00:09:25.950 ' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.950 --rc genhtml_branch_coverage=1 00:09:25.950 --rc genhtml_function_coverage=1 00:09:25.950 --rc genhtml_legend=1 00:09:25.950 --rc geninfo_all_blocks=1 00:09:25.950 --rc geninfo_unexecuted_blocks=1 00:09:25.950 00:09:25.950 ' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # : 0 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:09:25.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@56 -- # have_pci_nics=0 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.950 11:44:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:25.950 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # xtrace_disable 00:09:25.951 11:44:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.094 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.094 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_devs=() 00:09:34.094 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_devs 00:09:34.094 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_net_devs=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # pci_drivers=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # local -A pci_drivers 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # net_devs=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga net_devs 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # e810=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga e810 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # x722=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga x722 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # mlx=() 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # local -ga mlx 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.095 11:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:34.095 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:34.095 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.095 11:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:34.095 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:34.095 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.095 11:44:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:09:34.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:09:34.095 00:09:34.095 --- 10.0.0.2 ping statistics --- 00:09:34.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.095 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:34.095 00:09:34.095 --- 10.0.0.1 ping statistics --- 00:09:34.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.095 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:34.095 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=4098173 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 4098173 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 4098173 ']' 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.096 11:44:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.096 [2024-12-09 11:44:40.987317] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
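The nvmf_tcp_init block above builds the test topology: the first e810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24 for the target, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24; an iptables rule opens TCP port 4420 and one ping in each direction confirms the path before nvmf_tgt is launched inside the namespace via "ip netns exec". A condensed sketch of those steps, with the interface and address values from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in (the ipts wrapper also tags a comment)
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns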
00:09:34.096 [2024-12-09 11:44:40.987387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.096 [2024-12-09 11:44:41.085140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.096 [2024-12-09 11:44:41.138582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.096 [2024-12-09 11:44:41.138647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.096 [2024-12-09 11:44:41.138656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.096 [2024-12-09 11:44:41.138663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.096 [2024-12-09 11:44:41.138669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.096 [2024-12-09 11:44:41.140694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.096 [2024-12-09 11:44:41.140775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.096 [2024-12-09 11:44:41.141112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.096 [2024-12-09 11:44:41.141114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.096 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.357 [2024-12-09 11:44:41.983193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.357 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.357 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:34.357 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.617 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:34.617 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.877 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:34.877 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.137 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:35.137 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:35.137 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.398 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:35.398 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.658 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:35.658 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.658 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.658 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:35.918 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.180 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.180 11:44:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.440 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.440 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.440 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.700 [2024-12-09 11:44:44.401567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.700 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:36.960 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:36.961 11:44:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.876 11:44:46 
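fio.sh then provisions the target entirely over JSON-RPC before the initiator attaches: two standalone 64 MiB malloc bdevs (512-byte blocks), a two-member raid0 volume and a three-member concat volume, one subsystem carrying all four as namespaces, and a TCP listener on the namespace-side address. Condensed from the trace above (NQN, serial, and sizes exactly as logged; rpc.py stands for the full scripts/rpc.py path, and the host NQN/ID flags of the actual nvme connect are omitted here):

rpc.py bdev_malloc_create 64 512                      # run twice: Malloc0 and Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME            # waitforserial polls until this prints 4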
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:38.876 11:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:38.876 11:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.876 11:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:38.876 11:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:38.876 11:44:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:40.812 11:44:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.812 [global] 00:09:40.812 thread=1 00:09:40.812 invalidate=1 00:09:40.812 rw=write 00:09:40.812 time_based=1 00:09:40.812 runtime=1 00:09:40.812 ioengine=libaio 00:09:40.812 direct=1 00:09:40.812 bs=4096 00:09:40.812 iodepth=1 00:09:40.812 norandommap=0 00:09:40.812 numjobs=1 00:09:40.812 00:09:40.812 verify_dump=1 00:09:40.812 verify_backlog=512 00:09:40.812 verify_state_save=0 00:09:40.812 do_verify=1 00:09:40.812 verify=crc32c-intel 00:09:40.812 [job0] 00:09:40.812 filename=/dev/nvme0n1 00:09:40.812 [job1] 00:09:40.812 filename=/dev/nvme0n2 00:09:40.812 [job2] 00:09:40.812 filename=/dev/nvme0n3 00:09:40.812 [job3] 00:09:40.812 filename=/dev/nvme0n4 00:09:40.812 Could not set queue depth (nvme0n1) 00:09:40.812 Could not set queue depth (nvme0n2) 00:09:40.812 Could not set queue depth (nvme0n3) 00:09:40.812 Could not set queue depth (nvme0n4) 00:09:41.074 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.074 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.074 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.074 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.074 fio-3.35 00:09:41.074 Starting 4 threads 00:09:42.460 00:09:42.460 job0: (groupid=0, jobs=1): err= 0: pid=4099820: Mon Dec 9 11:44:50 2024 00:09:42.460 read: IOPS=18, BW=73.2KiB/s (75.0kB/s)(76.0KiB/1038msec) 00:09:42.460 slat (nsec): min=9912, max=27525, avg=26096.74, stdev=3925.18 00:09:42.460 clat (usec): min=40934, max=42111, avg=41651.73, stdev=470.19 00:09:42.460 lat (usec): min=40962, max=42138, avg=41677.82, stdev=471.41 00:09:42.460 clat percentiles (usec): 00:09:42.460 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:09:42.460 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:42.460 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.460 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.460 | 99.99th=[42206] 00:09:42.460 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:42.460 slat (nsec): min=8925, max=66143, avg=28874.39, stdev=10848.17 00:09:42.460 clat (usec): min=227, max=815, avg=445.04, stdev=100.36 00:09:42.461 lat (usec): min=265, max=848, avg=473.92, stdev=105.49 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 322], 20.00th=[ 355], 00:09:42.461 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 469], 00:09:42.461 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 635], 00:09:42.461 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 816], 99.95th=[ 816], 00:09:42.461 | 99.99th=[ 816] 00:09:42.461 bw ( KiB/s): min= 4096, max= 4096, per=44.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.461 lat (usec) : 250=0.56%, 500=73.63%, 750=20.90%, 1000=1.32% 00:09:42.461 lat (msec) : 50=3.58% 00:09:42.461 cpu : usr=1.25%, sys=1.54%, ctx=531, majf=0, minf=1 00:09:42.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.461 job1: (groupid=0, jobs=1): err= 0: pid=4099825: Mon Dec 9 11:44:50 2024 00:09:42.461 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:42.461 slat (nsec): min=6696, max=47141, avg=26830.11, stdev=2281.66 00:09:42.461 clat (usec): min=494, max=1270, avg=972.09, stdev=110.58 00:09:42.461 lat (usec): min=522, max=1297, avg=998.92, stdev=110.65 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 906], 00:09:42.461 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:42.461 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:42.461 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:42.461 | 99.99th=[ 1270] 00:09:42.461 write: IOPS=701, BW=2805KiB/s (2873kB/s)(2808KiB/1001msec); 0 zone resets 00:09:42.461 slat (nsec): min=9225, max=53900, avg=31741.56, stdev=8512.12 00:09:42.461 clat (usec): min=207, max=979, avg=650.56, stdev=133.69 00:09:42.461 lat (usec): min=245, max=1012, avg=682.30, stdev=135.86 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 293], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 545], 00:09:42.461 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 668], 60.00th=[ 701], 00:09:42.461 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 848], 00:09:42.461 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:09:42.461 | 99.99th=[ 979] 00:09:42.461 bw ( KiB/s): min= 4096, max= 4096, per=44.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.461 lat (usec) : 250=0.16%, 500=6.92%, 750=37.81%, 1000=35.09% 00:09:42.461 lat (msec) : 2=20.02% 00:09:42.461 cpu : usr=2.50%, sys=4.90%, ctx=1214, majf=0, minf=1 00:09:42.461 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 issued rwts: total=512,702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.461 job2: (groupid=0, jobs=1): err= 0: pid=4099827: Mon Dec 9 11:44:50 2024 00:09:42.461 read: IOPS=254, BW=1019KiB/s (1044kB/s)(1060KiB/1040msec) 00:09:42.461 slat (nsec): min=25941, max=45401, avg=26977.62, stdev=2382.18 00:09:42.461 clat (usec): min=871, max=41973, avg=2640.06, stdev=7631.21 00:09:42.461 lat (usec): min=898, max=41999, avg=2667.04, stdev=7631.14 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 898], 5.00th=[ 1004], 10.00th=[ 1045], 20.00th=[ 1090], 00:09:42.461 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:09:42.461 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1270], 00:09:42.461 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:42.461 | 99.99th=[42206] 00:09:42.461 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:42.461 slat (nsec): min=10144, max=71831, avg=31612.40, stdev=9263.59 00:09:42.461 clat (usec): min=220, max=877, avg=605.71, stdev=114.16 00:09:42.461 lat (usec): min=255, max=929, avg=637.32, stdev=118.09 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 343], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 502], 00:09:42.461 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:42.461 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:09:42.461 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:09:42.461 | 99.99th=[ 881] 00:09:42.461 bw ( KiB/s): min= 4096, max= 4096, per=44.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.461 lat (usec) : 250=0.13%, 500=13.13%, 750=47.10%, 1000=7.08% 00:09:42.461 lat (msec) : 2=31.27%, 50=1.29% 00:09:42.461 cpu : usr=1.35%, sys=2.02%, ctx=778, majf=0, minf=1 00:09:42.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 issued rwts: total=265,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.461 job3: (groupid=0, jobs=1): err= 0: pid=4099828: Mon Dec 9 11:44:50 2024 00:09:42.461 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:42.461 slat (nsec): min=8285, max=63163, avg=27576.56, stdev=3506.77 00:09:42.461 clat (usec): min=742, max=1371, avg=1109.36, stdev=98.78 00:09:42.461 lat (usec): min=770, max=1397, avg=1136.94, stdev=98.78 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 791], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1037], 00:09:42.461 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:09:42.461 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:09:42.461 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1369], 00:09:42.461 | 99.99th=[ 1369] 00:09:42.461 write: IOPS=661, BW=2645KiB/s (2709kB/s)(2648KiB/1001msec); 0 zone resets 00:09:42.461 slat (nsec): min=10202, max=65754, avg=31582.40, stdev=10176.59 00:09:42.461 clat (usec): min=219, 
max=824, avg=586.43, stdev=112.95 00:09:42.461 lat (usec): min=231, max=871, avg=618.01, stdev=117.49 00:09:42.461 clat percentiles (usec): 00:09:42.461 | 1.00th=[ 273], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 490], 00:09:42.461 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:09:42.461 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 742], 00:09:42.461 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 824], 99.95th=[ 824], 00:09:42.461 | 99.99th=[ 824] 00:09:42.461 bw ( KiB/s): min= 4096, max= 4096, per=44.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.461 lat (usec) : 250=0.17%, 500=12.35%, 750=41.57%, 1000=8.26% 00:09:42.461 lat (msec) : 2=37.65% 00:09:42.461 cpu : usr=1.80%, sys=3.60%, ctx=1175, majf=0, minf=1 00:09:42.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.461 issued rwts: total=512,662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.461 00:09:42.461 Run status group 0 (all jobs): 00:09:42.461 READ: bw=5031KiB/s (5152kB/s), 73.2KiB/s-2046KiB/s (75.0kB/s-2095kB/s), io=5232KiB (5358kB), run=1001-1040msec 00:09:42.461 WRITE: bw=9185KiB/s (9405kB/s), 1969KiB/s-2805KiB/s (2016kB/s-2873kB/s), io=9552KiB (9781kB), run=1001-1040msec 00:09:42.461 00:09:42.461 Disk stats (read/write): 00:09:42.461 nvme0n1: ios=64/512, merge=0/0, ticks=858/173, in_queue=1031, util=91.48% 00:09:42.461 nvme0n2: ios=501/512, merge=0/0, ticks=467/263, in_queue=730, util=86.28% 00:09:42.461 nvme0n3: ios=281/512, merge=0/0, ticks=1375/301, in_queue=1676, util=96.71% 00:09:42.461 nvme0n4: ios=467/512, merge=0/0, ticks=1410/291, in_queue=1701, util=96.67% 00:09:42.461 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:42.461 [global] 00:09:42.461 thread=1 00:09:42.461 invalidate=1 00:09:42.461 rw=randwrite 00:09:42.461 time_based=1 00:09:42.461 runtime=1 00:09:42.461 ioengine=libaio 00:09:42.461 direct=1 00:09:42.461 bs=4096 00:09:42.461 iodepth=1 00:09:42.461 norandommap=0 00:09:42.461 numjobs=1 00:09:42.461 00:09:42.461 verify_dump=1 00:09:42.461 verify_backlog=512 00:09:42.461 verify_state_save=0 00:09:42.461 do_verify=1 00:09:42.461 verify=crc32c-intel 00:09:42.461 [job0] 00:09:42.461 filename=/dev/nvme0n1 00:09:42.461 [job1] 00:09:42.461 filename=/dev/nvme0n2 00:09:42.461 [job2] 00:09:42.461 filename=/dev/nvme0n3 00:09:42.461 [job3] 00:09:42.461 filename=/dev/nvme0n4 00:09:42.461 Could not set queue depth (nvme0n1) 00:09:42.461 Could not set queue depth (nvme0n2) 00:09:42.461 Could not set queue depth (nvme0n3) 00:09:42.461 Could not set queue depth (nvme0n4) 00:09:42.723 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.723 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.723 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.723 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.723 fio-3.35 00:09:42.723 Starting 4 threads 
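Each fio pass drives the four exported namespaces (/dev/nvme0n1 through /dev/nvme0n4) as one job apiece; the "Could not set queue depth" warnings appear harmless here, since every job still completes with err=0. The fio-wrapper flags map onto the generated [global] sections in a straightforward way; the mapping below is inferred from the invocations and job files in this log, not from the wrapper's source:

scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
#                   -p nvmf       run against the /dev/nvme* devices of the connected subsystem
#                   -i 4096       bs=4096
#                   -d 1          iodepth=1 (128 in the later passes)
#                   -t randwrite  rw=randwrite (write/read in the other passes)
#                   -r 1          runtime=1 with time_based=1
#                   -v            crc32c-intel verification (do_verify=1, verify_backlog=512)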
00:09:44.111 00:09:44.111 job0: (groupid=0, jobs=1): err= 0: pid=4100346: Mon Dec 9 11:44:51 2024 00:09:44.111 read: IOPS=653, BW=2613KiB/s (2676kB/s)(2616KiB/1001msec) 00:09:44.111 slat (nsec): min=7195, max=47395, avg=23736.95, stdev=8436.75 00:09:44.111 clat (usec): min=335, max=950, avg=767.73, stdev=70.85 00:09:44.111 lat (usec): min=343, max=977, avg=791.47, stdev=73.15 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 709], 00:09:44.111 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:09:44.111 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:09:44.111 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 947], 99.95th=[ 947], 00:09:44.111 | 99.99th=[ 947] 00:09:44.111 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:44.111 slat (nsec): min=6433, max=64857, avg=23987.03, stdev=12686.30 00:09:44.111 clat (usec): min=145, max=765, avg=435.74, stdev=87.48 00:09:44.111 lat (usec): min=180, max=774, avg=459.73, stdev=91.04 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 245], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 351], 00:09:44.111 | 30.00th=[ 396], 40.00th=[ 433], 50.00th=[ 453], 60.00th=[ 469], 00:09:44.111 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:09:44.111 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 766], 00:09:44.111 | 99.99th=[ 766] 00:09:44.111 bw ( KiB/s): min= 4096, max= 4096, per=29.77%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.111 lat (usec) : 250=0.66%, 500=47.26%, 750=24.97%, 1000=27.12% 00:09:44.111 cpu : usr=1.60%, sys=4.70%, ctx=1682, majf=0, minf=1 00:09:44.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.111 issued rwts: total=654,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.111 job1: (groupid=0, jobs=1): err= 0: pid=4100347: Mon Dec 9 11:44:51 2024 00:09:44.111 read: IOPS=666, BW=2665KiB/s (2729kB/s)(2668KiB/1001msec) 00:09:44.111 slat (nsec): min=6969, max=59734, avg=22873.64, stdev=8180.52 00:09:44.111 clat (usec): min=417, max=2781, avg=771.57, stdev=109.00 00:09:44.111 lat (usec): min=443, max=2789, avg=794.44, stdev=109.77 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 537], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 701], 00:09:44.111 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 799], 00:09:44.111 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 865], 00:09:44.111 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 2769], 99.95th=[ 2769], 00:09:44.111 | 99.99th=[ 2769] 00:09:44.111 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:44.111 slat (nsec): min=9410, max=50117, avg=26756.02, stdev=9896.90 00:09:44.111 clat (usec): min=178, max=675, avg=420.96, stdev=83.75 00:09:44.111 lat (usec): min=196, max=707, avg=447.72, stdev=88.50 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 239], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 343], 00:09:44.111 | 30.00th=[ 375], 40.00th=[ 412], 50.00th=[ 433], 60.00th=[ 453], 00:09:44.111 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 545], 00:09:44.111 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 668], 
99.95th=[ 676], 00:09:44.111 | 99.99th=[ 676] 00:09:44.111 bw ( KiB/s): min= 4096, max= 4096, per=29.77%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.111 lat (usec) : 250=0.89%, 500=51.09%, 750=21.41%, 1000=26.55% 00:09:44.111 lat (msec) : 4=0.06% 00:09:44.111 cpu : usr=2.10%, sys=4.70%, ctx=1691, majf=0, minf=1 00:09:44.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.111 issued rwts: total=667,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.111 job2: (groupid=0, jobs=1): err= 0: pid=4100348: Mon Dec 9 11:44:51 2024 00:09:44.111 read: IOPS=702, BW=2809KiB/s (2877kB/s)(2812KiB/1001msec) 00:09:44.111 slat (nsec): min=7058, max=45704, avg=23540.34, stdev=7419.69 00:09:44.111 clat (usec): min=275, max=1070, avg=726.67, stdev=84.01 00:09:44.111 lat (usec): min=301, max=1096, avg=750.21, stdev=85.47 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 486], 5.00th=[ 570], 10.00th=[ 627], 20.00th=[ 660], 00:09:44.111 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:09:44.111 | 70.00th=[ 766], 80.00th=[ 783], 90.00th=[ 807], 95.00th=[ 832], 00:09:44.111 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1074], 99.95th=[ 1074], 00:09:44.111 | 99.99th=[ 1074] 00:09:44.111 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:44.111 slat (nsec): min=9488, max=53776, avg=27082.00, stdev=10074.05 00:09:44.111 clat (usec): min=132, max=678, avg=423.07, stdev=89.50 00:09:44.111 lat (usec): min=143, max=711, avg=450.15, stdev=95.35 00:09:44.111 clat percentiles (usec): 00:09:44.111 | 1.00th=[ 231], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 334], 00:09:44.111 | 30.00th=[ 371], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 465], 00:09:44.111 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 545], 00:09:44.111 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 676], 00:09:44.111 | 99.99th=[ 676] 00:09:44.111 bw ( KiB/s): min= 4096, max= 4096, per=29.77%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.111 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.111 lat (usec) : 250=1.27%, 500=46.96%, 750=34.05%, 1000=17.54% 00:09:44.111 lat (msec) : 2=0.17% 00:09:44.111 cpu : usr=2.20%, sys=4.80%, ctx=1727, majf=0, minf=2 00:09:44.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.112 issued rwts: total=703,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.112 job3: (groupid=0, jobs=1): err= 0: pid=4100349: Mon Dec 9 11:44:51 2024 00:09:44.112 read: IOPS=19, BW=76.8KiB/s (78.6kB/s)(80.0KiB/1042msec) 00:09:44.112 slat (nsec): min=26633, max=27159, avg=26791.00, stdev=133.84 00:09:44.112 clat (usec): min=40913, max=42002, avg=41811.26, stdev=374.12 00:09:44.112 lat (usec): min=40940, max=42029, avg=41838.05, stdev=374.09 00:09:44.112 clat percentiles (usec): 00:09:44.112 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:44.112 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 
60.00th=[42206], 00:09:44.112 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:44.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.112 | 99.99th=[42206] 00:09:44.112 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:09:44.112 slat (nsec): min=6152, max=51433, avg=18030.43, stdev=12002.35 00:09:44.112 clat (usec): min=108, max=842, avg=378.05, stdev=115.29 00:09:44.112 lat (usec): min=118, max=849, avg=396.08, stdev=113.09 00:09:44.112 clat percentiles (usec): 00:09:44.112 | 1.00th=[ 121], 5.00th=[ 235], 10.00th=[ 255], 20.00th=[ 281], 00:09:44.112 | 30.00th=[ 302], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 388], 00:09:44.112 | 70.00th=[ 424], 80.00th=[ 478], 90.00th=[ 529], 95.00th=[ 586], 00:09:44.112 | 99.00th=[ 676], 99.50th=[ 750], 99.90th=[ 840], 99.95th=[ 840], 00:09:44.112 | 99.99th=[ 840] 00:09:44.112 bw ( KiB/s): min= 4096, max= 4096, per=29.77%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.112 lat (usec) : 250=7.71%, 500=73.31%, 750=14.66%, 1000=0.56% 00:09:44.112 lat (msec) : 50=3.76% 00:09:44.112 cpu : usr=0.77%, sys=0.67%, ctx=532, majf=0, minf=2 00:09:44.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.112 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.112 00:09:44.112 Run status group 0 (all jobs): 00:09:44.112 READ: bw=7846KiB/s (8035kB/s), 76.8KiB/s-2809KiB/s (78.6kB/s-2877kB/s), io=8176KiB (8372kB), run=1001-1042msec 00:09:44.112 WRITE: bw=13.4MiB/s (14.1MB/s), 1965KiB/s-4092KiB/s (2013kB/s-4190kB/s), io=14.0MiB (14.7MB), run=1001-1042msec 00:09:44.112 00:09:44.112 Disk stats (read/write): 00:09:44.112 nvme0n1: ios=540/815, merge=0/0, ticks=1329/339, in_queue=1668, util=96.59% 00:09:44.112 nvme0n2: ios=548/828, merge=0/0, ticks=513/331, in_queue=844, util=91.43% 00:09:44.112 nvme0n3: ios=512/875, merge=0/0, ticks=371/358, in_queue=729, util=86.91% 00:09:44.112 nvme0n4: ios=19/512, merge=0/0, ticks=795/191, in_queue=986, util=91.38% 00:09:44.112 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:44.112 [global] 00:09:44.112 thread=1 00:09:44.112 invalidate=1 00:09:44.112 rw=write 00:09:44.112 time_based=1 00:09:44.112 runtime=1 00:09:44.112 ioengine=libaio 00:09:44.112 direct=1 00:09:44.112 bs=4096 00:09:44.112 iodepth=128 00:09:44.112 norandommap=0 00:09:44.112 numjobs=1 00:09:44.112 00:09:44.112 verify_dump=1 00:09:44.112 verify_backlog=512 00:09:44.112 verify_state_save=0 00:09:44.112 do_verify=1 00:09:44.112 verify=crc32c-intel 00:09:44.112 [job0] 00:09:44.112 filename=/dev/nvme0n1 00:09:44.112 [job1] 00:09:44.112 filename=/dev/nvme0n2 00:09:44.112 [job2] 00:09:44.112 filename=/dev/nvme0n3 00:09:44.112 [job3] 00:09:44.112 filename=/dev/nvme0n4 00:09:44.112 Could not set queue depth (nvme0n1) 00:09:44.112 Could not set queue depth (nvme0n2) 00:09:44.112 Could not set queue depth (nvme0n3) 00:09:44.112 Could not set queue depth (nvme0n4) 00:09:44.373 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.373 job1: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.373 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.373 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.373 fio-3.35 00:09:44.373 Starting 4 threads 00:09:45.760 00:09:45.760 job0: (groupid=0, jobs=1): err= 0: pid=4100876: Mon Dec 9 11:44:53 2024 00:09:45.760 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:45.760 slat (nsec): min=942, max=11915k, avg=89349.50, stdev=590482.95 00:09:45.760 clat (usec): min=3439, max=38194, avg=11104.02, stdev=4586.90 00:09:45.760 lat (usec): min=3447, max=38224, avg=11193.37, stdev=4635.61 00:09:45.760 clat percentiles (usec): 00:09:45.760 | 1.00th=[ 4359], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7832], 00:09:45.760 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[10028], 60.00th=[11338], 00:09:45.760 | 70.00th=[12780], 80.00th=[13566], 90.00th=[16909], 95.00th=[19268], 00:09:45.760 | 99.00th=[28705], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:09:45.760 | 99.99th=[38011] 00:09:45.760 write: IOPS=5164, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1003msec); 0 zone resets 00:09:45.760 slat (nsec): min=1625, max=17088k, avg=94365.10, stdev=777447.45 00:09:45.760 clat (usec): min=509, max=52790, avg=13555.33, stdev=7992.39 00:09:45.760 lat (usec): min=513, max=52822, avg=13649.70, stdev=8059.19 00:09:45.760 clat percentiles (usec): 00:09:45.760 | 1.00th=[ 2278], 5.00th=[ 5342], 10.00th=[ 6521], 20.00th=[ 7635], 00:09:45.760 | 30.00th=[ 8160], 40.00th=[ 9372], 50.00th=[11338], 60.00th=[13960], 00:09:45.760 | 70.00th=[15401], 80.00th=[17695], 90.00th=[22938], 95.00th=[30540], 00:09:45.760 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[51643], 00:09:45.760 | 99.99th=[52691] 00:09:45.760 bw ( KiB/s): min=16384, max=24576, per=22.14%, avg=20480.00, stdev=5792.62, samples=2 00:09:45.760 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:09:45.760 lat (usec) : 750=0.03%, 1000=0.03% 00:09:45.760 lat (msec) : 2=0.37%, 4=0.87%, 10=45.75%, 20=43.57%, 50=9.35% 00:09:45.760 lat (msec) : 100=0.03% 00:09:45.760 cpu : usr=3.39%, sys=5.29%, ctx=414, majf=0, minf=1 00:09:45.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:45.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.760 issued rwts: total=5120,5180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.760 job1: (groupid=0, jobs=1): err= 0: pid=4100877: Mon Dec 9 11:44:53 2024 00:09:45.760 read: IOPS=7291, BW=28.5MiB/s (29.9MB/s)(28.6MiB/1004msec) 00:09:45.760 slat (nsec): min=1035, max=19311k, avg=62605.89, stdev=542183.66 00:09:45.760 clat (usec): min=1231, max=38877, avg=9162.74, stdev=4544.82 00:09:45.760 lat (usec): min=2185, max=38888, avg=9225.34, stdev=4577.69 00:09:45.760 clat percentiles (usec): 00:09:45.760 | 1.00th=[ 3130], 5.00th=[ 4686], 10.00th=[ 5866], 20.00th=[ 6587], 00:09:45.760 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 8094], 60.00th=[ 8717], 00:09:45.760 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[13173], 95.00th=[16581], 00:09:45.760 | 99.00th=[27132], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:09:45.760 | 99.99th=[39060] 00:09:45.760 write: IOPS=7649, BW=29.9MiB/s 
(31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:09:45.760 slat (nsec): min=1682, max=8064.1k, avg=56716.96, stdev=414659.20 00:09:45.760 clat (usec): min=1267, max=31510, avg=7833.15, stdev=4487.51 00:09:45.760 lat (usec): min=1279, max=31518, avg=7889.87, stdev=4518.67 00:09:45.760 clat percentiles (usec): 00:09:45.761 | 1.00th=[ 2868], 5.00th=[ 4047], 10.00th=[ 4359], 20.00th=[ 4948], 00:09:45.761 | 30.00th=[ 5735], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7111], 00:09:45.761 | 70.00th=[ 7767], 80.00th=[ 8848], 90.00th=[12256], 95.00th=[17957], 00:09:45.761 | 99.00th=[26870], 99.50th=[27919], 99.90th=[30802], 99.95th=[31589], 00:09:45.761 | 99.99th=[31589] 00:09:45.761 bw ( KiB/s): min=28672, max=32833, per=33.24%, avg=30752.50, stdev=2942.27, samples=2 00:09:45.761 iops : min= 7168, max= 8208, avg=7688.00, stdev=735.39, samples=2 00:09:45.761 lat (msec) : 2=0.29%, 4=3.45%, 10=76.49%, 20=15.95%, 50=3.82% 00:09:45.761 cpu : usr=6.48%, sys=8.47%, ctx=359, majf=0, minf=1 00:09:45.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:45.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.761 issued rwts: total=7321,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.761 job2: (groupid=0, jobs=1): err= 0: pid=4100878: Mon Dec 9 11:44:53 2024 00:09:45.761 read: IOPS=4246, BW=16.6MiB/s (17.4MB/s)(17.3MiB/1043msec) 00:09:45.761 slat (nsec): min=1008, max=13266k, avg=123559.24, stdev=768809.75 00:09:45.761 clat (usec): min=2997, max=50334, avg=16070.44, stdev=8944.63 00:09:45.761 lat (usec): min=3007, max=52537, avg=16194.00, stdev=8996.32 00:09:45.761 clat percentiles (usec): 00:09:45.761 | 1.00th=[ 5800], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 8586], 00:09:45.761 | 30.00th=[ 9110], 40.00th=[11863], 50.00th=[13698], 60.00th=[16712], 00:09:45.761 | 70.00th=[19530], 80.00th=[21627], 90.00th=[28705], 95.00th=[31589], 00:09:45.761 | 99.00th=[47449], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:09:45.761 | 99.99th=[50594] 00:09:45.761 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:09:45.761 slat (nsec): min=1677, max=9627.0k, avg=90754.29, stdev=565227.27 00:09:45.761 clat (usec): min=1358, max=35052, avg=13221.01, stdev=6212.14 00:09:45.761 lat (usec): min=1370, max=35107, avg=13311.77, stdev=6241.89 00:09:45.761 clat percentiles (usec): 00:09:45.761 | 1.00th=[ 3294], 5.00th=[ 5276], 10.00th=[ 6849], 20.00th=[ 7898], 00:09:45.761 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[11731], 60.00th=[13566], 00:09:45.761 | 70.00th=[15401], 80.00th=[18744], 90.00th=[22414], 95.00th=[25297], 00:09:45.761 | 99.00th=[31589], 99.50th=[31851], 99.90th=[31851], 99.95th=[33424], 00:09:45.761 | 99.99th=[34866] 00:09:45.761 bw ( KiB/s): min=16384, max=20521, per=19.94%, avg=18452.50, stdev=2925.30, samples=2 00:09:45.761 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:09:45.761 lat (msec) : 2=0.20%, 4=0.77%, 10=37.30%, 20=39.52%, 50=22.03% 00:09:45.761 lat (msec) : 100=0.18% 00:09:45.761 cpu : usr=3.74%, sys=4.41%, ctx=421, majf=0, minf=1 00:09:45.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:45.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.761 issued rwts: total=4429,4608,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:45.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.761 job3: (groupid=0, jobs=1): err= 0: pid=4100879: Mon Dec 9 11:44:53 2024 00:09:45.761 read: IOPS=6265, BW=24.5MiB/s (25.7MB/s)(25.1MiB/1024msec) 00:09:45.761 slat (nsec): min=945, max=11294k, avg=72690.78, stdev=523074.74 00:09:45.761 clat (usec): min=1847, max=44241, avg=10736.85, stdev=5080.78 00:09:45.761 lat (usec): min=1856, max=50689, avg=10809.54, stdev=5119.68 00:09:45.761 clat percentiles (usec): 00:09:45.761 | 1.00th=[ 3654], 5.00th=[ 5473], 10.00th=[ 6521], 20.00th=[ 7439], 00:09:45.761 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10552], 00:09:45.761 | 70.00th=[11600], 80.00th=[13304], 90.00th=[15401], 95.00th=[17957], 00:09:45.761 | 99.00th=[34341], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:09:45.761 | 99.99th=[44303] 00:09:45.761 write: IOPS=6500, BW=25.4MiB/s (26.6MB/s)(26.0MiB/1024msec); 0 zone resets 00:09:45.761 slat (nsec): min=1622, max=12750k, avg=64601.22, stdev=500051.00 00:09:45.761 clat (usec): min=469, max=35005, avg=9164.07, stdev=3927.83 00:09:45.761 lat (usec): min=531, max=35008, avg=9228.67, stdev=3952.48 00:09:45.761 clat percentiles (usec): 00:09:45.761 | 1.00th=[ 2409], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 6063], 00:09:45.761 | 30.00th=[ 7177], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9372], 00:09:45.761 | 70.00th=[10159], 80.00th=[10945], 90.00th=[13435], 95.00th=[16057], 00:09:45.761 | 99.00th=[21890], 99.50th=[24249], 99.90th=[34866], 99.95th=[34866], 00:09:45.761 | 99.99th=[34866] 00:09:45.761 bw ( KiB/s): min=24625, max=28672, per=28.80%, avg=26648.50, stdev=2861.66, samples=2 00:09:45.761 iops : min= 6156, max= 7168, avg=6662.00, stdev=715.59, samples=2 00:09:45.761 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.08% 00:09:45.761 lat (msec) : 2=0.18%, 4=2.15%, 10=58.60%, 20=35.73%, 50=3.22% 00:09:45.761 cpu : usr=4.99%, sys=7.92%, ctx=403, majf=0, minf=2 00:09:45.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:45.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.761 issued rwts: total=6416,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.761 00:09:45.761 Run status group 0 (all jobs): 00:09:45.761 READ: bw=87.2MiB/s (91.4MB/s), 16.6MiB/s-28.5MiB/s (17.4MB/s-29.9MB/s), io=91.0MiB (95.4MB), run=1003-1043msec 00:09:45.761 WRITE: bw=90.3MiB/s (94.7MB/s), 17.3MiB/s-29.9MiB/s (18.1MB/s-31.3MB/s), io=94.2MiB (98.8MB), run=1003-1043msec 00:09:45.761 00:09:45.761 Disk stats (read/write): 00:09:45.761 nvme0n1: ios=4309/4608, merge=0/0, ticks=24725/34092, in_queue=58817, util=98.30% 00:09:45.761 nvme0n2: ios=6123/6144, merge=0/0, ticks=47649/43242, in_queue=90891, util=96.94% 00:09:45.761 nvme0n3: ios=3683/4096, merge=0/0, ticks=24888/23054, in_queue=47942, util=96.94% 00:09:45.761 nvme0n4: ios=5325/5632, merge=0/0, ticks=37752/37576, in_queue=75328, util=88.66% 00:09:45.761 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:45.761 [global] 00:09:45.761 thread=1 00:09:45.761 invalidate=1 00:09:45.761 rw=randwrite 00:09:45.761 time_based=1 00:09:45.761 runtime=1 00:09:45.761 ioengine=libaio 00:09:45.761 direct=1 00:09:45.761 bs=4096 00:09:45.761 
iodepth=128 00:09:45.761 norandommap=0 00:09:45.761 numjobs=1 00:09:45.761 00:09:45.761 verify_dump=1 00:09:45.761 verify_backlog=512 00:09:45.761 verify_state_save=0 00:09:45.761 do_verify=1 00:09:45.761 verify=crc32c-intel 00:09:45.761 [job0] 00:09:45.761 filename=/dev/nvme0n1 00:09:45.761 [job1] 00:09:45.761 filename=/dev/nvme0n2 00:09:45.761 [job2] 00:09:45.762 filename=/dev/nvme0n3 00:09:45.762 [job3] 00:09:45.762 filename=/dev/nvme0n4 00:09:45.762 Could not set queue depth (nvme0n1) 00:09:45.762 Could not set queue depth (nvme0n2) 00:09:45.762 Could not set queue depth (nvme0n3) 00:09:45.762 Could not set queue depth (nvme0n4) 00:09:46.331 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.331 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.331 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.331 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.331 fio-3.35 00:09:46.331 Starting 4 threads 00:09:47.272 00:09:47.272 job0: (groupid=0, jobs=1): err= 0: pid=4101396: Mon Dec 9 11:44:55 2024 00:09:47.272 read: IOPS=7258, BW=28.4MiB/s (29.7MB/s)(28.5MiB/1005msec) 00:09:47.272 slat (nsec): min=1008, max=7860.2k, avg=68076.85, stdev=494957.92 00:09:47.272 clat (usec): min=2919, max=21265, avg=8879.48, stdev=2035.08 00:09:47.272 lat (usec): min=4815, max=21267, avg=8947.56, stdev=2076.06 00:09:47.272 clat percentiles (usec): 00:09:47.272 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7504], 00:09:47.272 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:47.272 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11207], 95.00th=[13042], 00:09:47.272 | 99.00th=[16319], 99.50th=[18220], 99.90th=[20579], 99.95th=[21365], 00:09:47.272 | 99.99th=[21365] 00:09:47.272 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:09:47.272 slat (nsec): min=1647, max=9111.8k, avg=59768.39, stdev=448510.01 00:09:47.273 clat (usec): min=1190, max=21260, avg=8108.65, stdev=3170.10 00:09:47.273 lat (usec): min=1199, max=21268, avg=8168.42, stdev=3198.12 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 4015], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5538], 00:09:47.273 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7963], 00:09:47.273 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[13173], 95.00th=[15139], 00:09:47.273 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17695], 99.95th=[18744], 00:09:47.273 | 99.99th=[21365] 00:09:47.273 bw ( KiB/s): min=28672, max=32760, per=30.63%, avg=30716.00, stdev=2890.65, samples=2 00:09:47.273 iops : min= 7168, max= 8190, avg=7679.00, stdev=722.66, samples=2 00:09:47.273 lat (msec) : 2=0.10%, 4=0.37%, 10=77.20%, 20=22.18%, 50=0.15% 00:09:47.273 cpu : usr=5.68%, sys=8.67%, ctx=360, majf=0, minf=1 00:09:47.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:47.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.273 issued rwts: total=7295,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.273 job1: (groupid=0, jobs=1): err= 0: pid=4101402: Mon Dec 9 11:44:55 2024 00:09:47.273 read: IOPS=5312, BW=20.8MiB/s 
(21.8MB/s)(20.8MiB/1002msec) 00:09:47.273 slat (nsec): min=949, max=23498k, avg=99129.05, stdev=945609.61 00:09:47.273 clat (usec): min=1385, max=76639, avg=14411.04, stdev=15411.54 00:09:47.273 lat (usec): min=1406, max=76649, avg=14510.17, stdev=15498.63 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 3326], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 6521], 00:09:47.273 | 30.00th=[ 6718], 40.00th=[ 7701], 50.00th=[ 8586], 60.00th=[ 9372], 00:09:47.273 | 70.00th=[11600], 80.00th=[15795], 90.00th=[28443], 95.00th=[61080], 00:09:47.273 | 99.00th=[70779], 99.50th=[70779], 99.90th=[77071], 99.95th=[77071], 00:09:47.273 | 99.99th=[77071] 00:09:47.273 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:47.273 slat (nsec): min=1557, max=13091k, avg=72839.81, stdev=577119.64 00:09:47.273 clat (usec): min=930, max=41808, avg=8914.80, stdev=5864.96 00:09:47.273 lat (usec): min=932, max=50014, avg=8987.64, stdev=5921.90 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 1467], 5.00th=[ 3425], 10.00th=[ 4621], 20.00th=[ 5735], 00:09:47.273 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7898], 00:09:47.273 | 70.00th=[ 8717], 80.00th=[11076], 90.00th=[14484], 95.00th=[22152], 00:09:47.273 | 99.00th=[32375], 99.50th=[34866], 99.90th=[41681], 99.95th=[41681], 00:09:47.273 | 99.99th=[41681] 00:09:47.273 bw ( KiB/s): min=16384, max=28672, per=22.47%, avg=22528.00, stdev=8688.93, samples=2 00:09:47.273 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:09:47.273 lat (usec) : 1000=0.06% 00:09:47.273 lat (msec) : 2=0.84%, 4=3.91%, 10=64.30%, 20=19.96%, 50=7.31% 00:09:47.273 lat (msec) : 100=3.61% 00:09:47.273 cpu : usr=3.60%, sys=5.49%, ctx=434, majf=0, minf=2 00:09:47.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:47.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.273 issued rwts: total=5323,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.273 job2: (groupid=0, jobs=1): err= 0: pid=4101403: Mon Dec 9 11:44:55 2024 00:09:47.273 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:09:47.273 slat (nsec): min=972, max=3544.6k, avg=73008.17, stdev=368008.04 00:09:47.273 clat (usec): min=5073, max=13302, avg=9481.21, stdev=1203.65 00:09:47.273 lat (usec): min=5486, max=13312, avg=9554.22, stdev=1202.63 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 6390], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 8455], 00:09:47.273 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:09:47.273 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:09:47.273 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12911], 99.95th=[13304], 00:09:47.273 | 99.99th=[13304] 00:09:47.273 write: IOPS=6945, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1003msec); 0 zone resets 00:09:47.273 slat (nsec): min=1579, max=9088.7k, avg=69812.93, stdev=385112.58 00:09:47.273 clat (usec): min=694, max=25394, avg=9129.69, stdev=1975.23 00:09:47.273 lat (usec): min=2907, max=25425, avg=9199.50, stdev=1977.54 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 5669], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8160], 00:09:47.273 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:47.273 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 
00:09:47.273 | 99.00th=[21627], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:09:47.273 | 99.99th=[25297] 00:09:47.273 bw ( KiB/s): min=26888, max=27816, per=27.28%, avg=27352.00, stdev=656.20, samples=2 00:09:47.273 iops : min= 6722, max= 6954, avg=6838.00, stdev=164.05, samples=2 00:09:47.273 lat (usec) : 750=0.01% 00:09:47.273 lat (msec) : 4=0.23%, 10=75.72%, 20=23.41%, 50=0.62% 00:09:47.273 cpu : usr=3.79%, sys=4.89%, ctx=635, majf=0, minf=1 00:09:47.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:47.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.273 issued rwts: total=6656,6966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.273 job3: (groupid=0, jobs=1): err= 0: pid=4101404: Mon Dec 9 11:44:55 2024 00:09:47.273 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:47.273 slat (nsec): min=968, max=26408k, avg=104277.18, stdev=907924.65 00:09:47.273 clat (usec): min=3631, max=77764, avg=14142.63, stdev=13242.58 00:09:47.273 lat (usec): min=3638, max=81495, avg=14246.91, stdev=13317.75 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 4817], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7701], 00:09:47.273 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8979], 60.00th=[ 9896], 00:09:47.273 | 70.00th=[11207], 80.00th=[18482], 90.00th=[27132], 95.00th=[39584], 00:09:47.273 | 99.00th=[73925], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:09:47.273 | 99.99th=[78119] 00:09:47.273 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1004msec); 0 zone resets 00:09:47.273 slat (nsec): min=1619, max=15817k, avg=99652.40, stdev=650929.11 00:09:47.273 clat (usec): min=1972, max=72068, avg=12527.40, stdev=10433.88 00:09:47.273 lat (usec): min=2398, max=72076, avg=12627.05, stdev=10507.93 00:09:47.273 clat percentiles (usec): 00:09:47.273 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6849], 00:09:47.273 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[10421], 00:09:47.273 | 70.00th=[11207], 80.00th=[15270], 90.00th=[23987], 95.00th=[33424], 00:09:47.273 | 99.00th=[64226], 99.50th=[66323], 99.90th=[71828], 99.95th=[71828], 00:09:47.273 | 99.99th=[71828] 00:09:47.273 bw ( KiB/s): min=12288, max=26024, per=19.10%, avg=19156.00, stdev=9712.82, samples=2 00:09:47.273 iops : min= 3072, max= 6506, avg=4789.00, stdev=2428.20, samples=2 00:09:47.273 lat (msec) : 2=0.01%, 4=0.84%, 10=57.02%, 20=26.67%, 50=12.47% 00:09:47.273 lat (msec) : 100=2.99% 00:09:47.273 cpu : usr=2.99%, sys=5.78%, ctx=328, majf=0, minf=1 00:09:47.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:47.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.273 issued rwts: total=4608,4917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.273 00:09:47.273 Run status group 0 (all jobs): 00:09:47.273 READ: bw=92.8MiB/s (97.3MB/s), 17.9MiB/s-28.4MiB/s (18.8MB/s-29.7MB/s), io=93.3MiB (97.8MB), run=1002-1005msec 00:09:47.273 WRITE: bw=97.9MiB/s (103MB/s), 19.1MiB/s-29.9MiB/s (20.1MB/s-31.3MB/s), io=98.4MiB (103MB), run=1002-1005msec 00:09:47.273 00:09:47.273 Disk stats (read/write): 00:09:47.273 nvme0n1: ios=6153/6144, merge=0/0, ticks=52298/48140, 
in_queue=100438, util=84.77% 00:09:47.273 nvme0n2: ios=4924/5120, merge=0/0, ticks=31778/26422, in_queue=58200, util=90.62% 00:09:47.273 nvme0n3: ios=5688/5680, merge=0/0, ticks=15682/14444, in_queue=30126, util=92.31% 00:09:47.273 nvme0n4: ios=3605/3702, merge=0/0, ticks=25625/22947, in_queue=48572, util=92.85% 00:09:47.273 11:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:47.534 11:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4101734 00:09:47.534 11:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:47.534 11:44:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:47.534 [global] 00:09:47.534 thread=1 00:09:47.534 invalidate=1 00:09:47.534 rw=read 00:09:47.534 time_based=1 00:09:47.534 runtime=10 00:09:47.534 ioengine=libaio 00:09:47.534 direct=1 00:09:47.534 bs=4096 00:09:47.534 iodepth=1 00:09:47.534 norandommap=1 00:09:47.534 numjobs=1 00:09:47.534 00:09:47.534 [job0] 00:09:47.534 filename=/dev/nvme0n1 00:09:47.534 [job1] 00:09:47.534 filename=/dev/nvme0n2 00:09:47.534 [job2] 00:09:47.534 filename=/dev/nvme0n3 00:09:47.534 [job3] 00:09:47.534 filename=/dev/nvme0n4 00:09:47.534 Could not set queue depth (nvme0n1) 00:09:47.534 Could not set queue depth (nvme0n2) 00:09:47.534 Could not set queue depth (nvme0n3) 00:09:47.534 Could not set queue depth (nvme0n4) 00:09:47.795 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.795 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.795 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.795 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.795 fio-3.35 00:09:47.795 Starting 4 threads 00:09:50.342 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:50.603 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13656064, buflen=4096 00:09:50.603 fio: pid=4101928, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.603 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:50.863 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.863 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:50.863 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:09:50.863 fio: pid=4101927, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:50.863 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.863 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:50.863 fio: io_u error on file /dev/nvme0n1: 
Operation not supported: read offset=3289088, buflen=4096 00:09:50.863 fio: pid=4101925, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.124 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.124 11:44:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:51.124 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3780608, buflen=4096 00:09:51.124 fio: pid=4101926, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.124 00:09:51.124 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4101925: Mon Dec 9 11:44:58 2024 00:09:51.124 read: IOPS=268, BW=1072KiB/s (1098kB/s)(3212KiB/2996msec) 00:09:51.124 slat (usec): min=6, max=2703, avg=26.35, stdev=95.20 00:09:51.124 clat (usec): min=425, max=42040, avg=3671.41, stdev=10482.84 00:09:51.124 lat (usec): min=451, max=44004, avg=3697.49, stdev=10495.98 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 545], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 725], 00:09:51.124 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:09:51.124 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 881], 95.00th=[41157], 00:09:51.124 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.124 | 99.99th=[42206] 00:09:51.124 bw ( KiB/s): min= 96, max= 4984, per=19.67%, avg=1265.60, stdev=2118.91, samples=5 00:09:51.124 iops : min= 24, max= 1246, avg=316.40, stdev=529.73, samples=5 00:09:51.124 lat (usec) : 500=0.50%, 750=26.49%, 1000=65.80% 00:09:51.124 lat (msec) : 50=7.09% 00:09:51.124 cpu : usr=0.40%, sys=0.57%, ctx=806, majf=0, minf=1 00:09:51.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.124 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4101926: Mon Dec 9 11:44:58 2024 00:09:51.124 read: IOPS=289, BW=1158KiB/s (1186kB/s)(3692KiB/3189msec) 00:09:51.124 slat (usec): min=7, max=14569, avg=62.86, stdev=659.01 00:09:51.124 clat (usec): min=593, max=43005, avg=3357.16, stdev=9639.01 00:09:51.124 lat (usec): min=619, max=56997, avg=3420.06, stdev=9807.26 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 898], 00:09:51.124 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 996], 00:09:51.124 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[41681], 00:09:51.124 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:09:51.124 | 99.99th=[43254] 00:09:51.124 bw ( KiB/s): min= 89, max= 4064, per=19.05%, avg=1225.50, stdev=1789.99, samples=6 00:09:51.124 iops : min= 22, max= 1016, avg=306.33, stdev=447.53, samples=6 00:09:51.124 lat (usec) : 750=1.73%, 1000=62.45% 00:09:51.124 lat (msec) : 2=29.87%, 50=5.84% 00:09:51.125 cpu : usr=0.38%, sys=1.29%, ctx=927, majf=0, minf=2 00:09:51.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.125 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 issued rwts: total=924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.125 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4101927: Mon Dec 9 11:44:58 2024 00:09:51.125 read: IOPS=24, BW=95.1KiB/s (97.4kB/s)(268KiB/2818msec) 00:09:51.125 slat (usec): min=25, max=7632, avg=138.35, stdev=922.34 00:09:51.125 clat (usec): min=992, max=43086, avg=41576.58, stdev=5057.73 00:09:51.125 lat (usec): min=1022, max=48969, avg=41716.62, stdev=5136.56 00:09:51.125 clat percentiles (usec): 00:09:51.125 | 1.00th=[ 996], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:51.125 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:51.125 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:09:51.125 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:51.125 | 99.99th=[43254] 00:09:51.125 bw ( KiB/s): min= 88, max= 96, per=1.46%, avg=94.40, stdev= 3.58, samples=5 00:09:51.125 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:09:51.125 lat (usec) : 1000=1.47% 00:09:51.125 lat (msec) : 50=97.06% 00:09:51.125 cpu : usr=0.00%, sys=0.11%, ctx=70, majf=0, minf=2 00:09:51.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.125 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4101928: Mon Dec 9 11:44:58 2024 00:09:51.125 read: IOPS=1280, BW=5121KiB/s (5244kB/s)(13.0MiB/2604msec) 00:09:51.125 slat (nsec): min=6365, max=74937, avg=24949.13, stdev=7911.31 00:09:51.125 clat (usec): min=215, max=1020, avg=743.91, stdev=95.60 00:09:51.125 lat (usec): min=222, max=1047, avg=768.87, stdev=98.22 00:09:51.125 clat percentiles (usec): 00:09:51.125 | 1.00th=[ 461], 5.00th=[ 578], 10.00th=[ 611], 20.00th=[ 676], 00:09:51.125 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 775], 00:09:51.125 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 873], 00:09:51.125 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 1012], 00:09:51.125 | 99.99th=[ 1020] 00:09:51.125 bw ( KiB/s): min= 5008, max= 5296, per=80.41%, avg=5171.20, stdev=135.84, samples=5 00:09:51.125 iops : min= 1252, max= 1324, avg=1292.80, stdev=33.96, samples=5 00:09:51.125 lat (usec) : 250=0.03%, 500=1.56%, 750=46.45%, 1000=51.84% 00:09:51.125 lat (msec) : 2=0.09% 00:09:51.125 cpu : usr=1.84%, sys=4.99%, ctx=3336, majf=0, minf=2 00:09:51.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 issued rwts: total=3335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.125 00:09:51.125 Run status group 0 (all jobs): 00:09:51.125 READ: bw=6431KiB/s (6585kB/s), 95.1KiB/s-5121KiB/s (97.4kB/s-5244kB/s), io=20.0MiB 
(21.0MB), run=2604-3189msec 00:09:51.125 00:09:51.125 Disk stats (read/write): 00:09:51.125 nvme0n1: ios=799/0, merge=0/0, ticks=2770/0, in_queue=2770, util=94.66% 00:09:51.125 nvme0n2: ios=921/0, merge=0/0, ticks=2982/0, in_queue=2982, util=95.10% 00:09:51.125 nvme0n3: ios=62/0, merge=0/0, ticks=2579/0, in_queue=2579, util=95.99% 00:09:51.125 nvme0n4: ios=3334/0, merge=0/0, ticks=2189/0, in_queue=2189, util=96.38% 00:09:51.386 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.386 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:51.647 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.647 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:51.647 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.647 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:51.908 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.908 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4101734 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:52.169 nvmf hotplug test: fio failed as expected 00:09:52.169 11:44:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # sync 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # set +e 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # for i in {1..20} 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:09:52.430 rmmod nvme_tcp 00:09:52.430 rmmod nvme_fabrics 00:09:52.430 rmmod nvme_keyring 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # set -e 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@130 -- # return 0 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 4098173 ']' 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 4098173 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 4098173 ']' 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 4098173 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4098173 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4098173' 00:09:52.430 killing process with pid 4098173 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 4098173 00:09:52.430 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 4098173 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # iptr 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # remove_spdk_ns 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.691 11:45:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:09:55.240 00:09:55.240 real 0m29.171s 00:09:55.240 user 2m40.615s 00:09:55.240 sys 0m9.562s 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.240 ************************************ 00:09:55.240 END TEST nvmf_fio_target 00:09:55.240 ************************************ 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.240 ************************************ 00:09:55.240 START TEST nvmf_bdevio 00:09:55.240 ************************************ 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.240 * Looking for test storage... 
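Before the bdevio run gets going, it is worth condensing the nvmf_fio_target teardown that just completed. The following is a minimal sketch of what nvmftestfini does, using this run's names (target pid 4098173, initiator interface cvl_0_1, namespace cvl_0_0_ns_spdk); the netns-delete step is an assumption, since _remove_spdk_ns runs with tracing disabled here, and killprocess additionally verifies the process name before killing:

  sync                                                   # flush page cache before unloading modules
  modprobe -v -r nvme-tcp                                # initiator-side transport module
  modprobe -v -r nvme-fabrics
  modprobe -v -r nvme-keyring
  kill 4098173 && wait 4098173                           # stop the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address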
00:09:55.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:55.240 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.241 --rc genhtml_branch_coverage=1 00:09:55.241 --rc genhtml_function_coverage=1 00:09:55.241 --rc genhtml_legend=1 00:09:55.241 --rc geninfo_all_blocks=1 00:09:55.241 --rc geninfo_unexecuted_blocks=1 00:09:55.241 00:09:55.241 ' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.241 --rc genhtml_branch_coverage=1 00:09:55.241 --rc genhtml_function_coverage=1 00:09:55.241 --rc genhtml_legend=1 00:09:55.241 --rc geninfo_all_blocks=1 00:09:55.241 --rc geninfo_unexecuted_blocks=1 00:09:55.241 00:09:55.241 ' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.241 --rc genhtml_branch_coverage=1 00:09:55.241 --rc genhtml_function_coverage=1 00:09:55.241 --rc genhtml_legend=1 00:09:55.241 --rc geninfo_all_blocks=1 00:09:55.241 --rc geninfo_unexecuted_blocks=1 00:09:55.241 00:09:55.241 ' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.241 --rc genhtml_branch_coverage=1 00:09:55.241 --rc genhtml_function_coverage=1 00:09:55.241 --rc genhtml_legend=1 00:09:55.241 --rc geninfo_all_blocks=1 00:09:55.241 --rc geninfo_unexecuted_blocks=1 00:09:55.241 00:09:55.241 ' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # : 0 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:09:55.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@56 -- # have_pci_nics=0 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # xtrace_disable 00:09:55.241 11:45:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_devs=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_devs 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_net_devs=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # pci_drivers=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # local -A pci_drivers 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # net_devs=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga net_devs 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # e810=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga e810 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # x722=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga x722 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # mlx=() 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # local -ga mlx 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@333 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:03.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:03.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:10:03.383 11:45:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:03.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:03.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.383 
11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.383 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:10:03.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:10:03.384 00:10:03.384 --- 10.0.0.2 ping statistics --- 00:10:03.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.384 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:10:03.384 00:10:03.384 --- 10.0.0.1 ping statistics --- 00:10:03.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.384 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=4107417 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 4107417 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 4107417 ']' 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.384 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.384 [2024-12-09 11:45:10.450889] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
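Both ping checks above succeed, which validates the topology nvmf_tcp_init assembled just before them: the target-side E810 port (cvl_0_0, 10.0.0.2) is isolated in the cvl_0_0_ns_spdk namespace while its peer (cvl_0_1, 10.0.0.1), evidently cabled back to it on this phy rig, stays in the root namespace. A condensed sketch of that setup, taken from the commands traced above (only the iptables comment string is elided here):

  ip netns add cvl_0_0_ns_spdk                        # target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP on port 4420; the SPDK_NVMF comment lets teardown strip exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2                                  # root ns  -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns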
00:10:03.384 [2024-12-09 11:45:10.450960] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.384 [2024-12-09 11:45:10.549677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.384 [2024-12-09 11:45:10.602153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.384 [2024-12-09 11:45:10.602210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.384 [2024-12-09 11:45:10.602218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.384 [2024-12-09 11:45:10.602226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.384 [2024-12-09 11:45:10.602232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.384 [2024-12-09 11:45:10.604575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.384 [2024-12-09 11:45:10.604726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.384 [2024-12-09 11:45:10.604887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.384 [2024-12-09 11:45:10.604887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.384 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 [2024-12-09 11:45:11.319231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 Malloc0 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 11:45:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 [2024-12-09 11:45:11.395372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.645 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.645 { 00:10:03.645 "params": { 00:10:03.645 "name": "Nvme$subsystem", 00:10:03.645 "trtype": "$TEST_TRANSPORT", 00:10:03.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.645 "adrfam": "ipv4", 00:10:03.645 "trsvcid": "$NVMF_PORT", 00:10:03.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.646 "hdgst": ${hdgst:-false}, 00:10:03.646 "ddgst": ${ddgst:-false} 00:10:03.646 }, 00:10:03.646 "method": "bdev_nvme_attach_controller" 00:10:03.646 } 00:10:03.646 EOF 00:10:03.646 )") 00:10:03.646 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:03.646 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:10:03.646 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:03.646 11:45:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.646 "params": { 00:10:03.646 "name": "Nvme1", 00:10:03.646 "trtype": "tcp", 00:10:03.646 "traddr": "10.0.0.2", 00:10:03.646 "adrfam": "ipv4", 00:10:03.646 "trsvcid": "4420", 00:10:03.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.646 "hdgst": false, 00:10:03.646 "ddgst": false 00:10:03.646 }, 00:10:03.646 "method": "bdev_nvme_attach_controller" 00:10:03.646 }' 00:10:03.646 [2024-12-09 11:45:11.454909] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
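Unwrapped from the log timestamps, the configuration that gen_nvmf_target_json pipes to bdevio over /dev/fd/62 just above is a single bdev_nvme_attach_controller entry; bdevio replays it at startup so the initiator side gets an Nvme1n1 bdev backed by the subsystem and listener created through the RPCs above. The entry itself is reproduced below; the surrounding envelope (presumably the usual SPDK JSON-config shape, {"subsystems": [{"subsystem": "bdev", "config": [ ... ]}]}) is not shown in the trace:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

With hdgst and ddgst false, no header or data digests are negotiated on the TCP connection, matching the ${hdgst:-false}/${ddgst:-false} defaults in the template printed above.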
00:10:03.646 [2024-12-09 11:45:11.454983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107775 ] 00:10:03.906 [2024-12-09 11:45:11.550689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.906 [2024-12-09 11:45:11.608184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.906 [2024-12-09 11:45:11.608316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.906 [2024-12-09 11:45:11.608320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.906 I/O targets: 00:10:03.906 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:03.906 00:10:03.906 00:10:03.906 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.906 http://cunit.sourceforge.net/ 00:10:03.906 00:10:03.906 00:10:03.906 Suite: bdevio tests on: Nvme1n1 00:10:04.167 Test: blockdev write read block ...passed 00:10:04.167 Test: blockdev write zeroes read block ...passed 00:10:04.167 Test: blockdev write zeroes read no split ...passed 00:10:04.167 Test: blockdev write zeroes read split ...passed 00:10:04.167 Test: blockdev write zeroes read split partial ...passed 00:10:04.167 Test: blockdev reset ...[2024-12-09 11:45:11.897671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:04.167 [2024-12-09 11:45:11.897744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe53580 (9): Bad file descriptor 00:10:04.167 [2024-12-09 11:45:11.955085] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:04.167 passed 00:10:04.167 Test: blockdev write read 8 blocks ...passed 00:10:04.167 Test: blockdev write read size > 128k ...passed 00:10:04.167 Test: blockdev write read invalid size ...passed 00:10:04.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:04.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:04.167 Test: blockdev write read max offset ...passed 00:10:04.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:04.428 Test: blockdev writev readv 8 blocks ...passed 00:10:04.428 Test: blockdev writev readv 30 x 1block ...passed 00:10:04.428 Test: blockdev writev readv block ...passed 00:10:04.428 Test: blockdev writev readv size > 128k ...passed 00:10:04.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:04.428 Test: blockdev comparev and writev ...[2024-12-09 11:45:12.134133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.134171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.134481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.134499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.134810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.134829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.134834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.135151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.135163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.135173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:04.428 [2024-12-09 11:45:12.135179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:04.428 passed 00:10:04.428 Test: blockdev nvme passthru rw ...passed 00:10:04.428 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:45:12.218073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.428 [2024-12-09 11:45:12.218084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.218298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.428 [2024-12-09 11:45:12.218305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:04.428 [2024-12-09 11:45:12.218536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.429 [2024-12-09 11:45:12.218544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:04.429 [2024-12-09 11:45:12.218790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:04.429 [2024-12-09 11:45:12.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:04.429 passed 00:10:04.429 Test: blockdev nvme admin passthru ...passed 00:10:04.429 Test: blockdev copy ...passed 00:10:04.429 00:10:04.429 Run Summary: Type Total Ran Passed Failed Inactive 00:10:04.429 suites 1 1 n/a 0 0 00:10:04.429 tests 23 23 23 0 0 00:10:04.429 asserts 152 152 152 0 n/a 00:10:04.429 00:10:04.429 Elapsed time = 1.020 seconds 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # sync 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # set +e 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # for i in {1..20} 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:10:04.691 rmmod nvme_tcp 00:10:04.691 rmmod nvme_fabrics 00:10:04.691 rmmod nvme_keyring 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # set -e 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@130 -- # return 0 
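The module teardown traced above (nvmftestfini) tolerates transient unload failures while the TCP connection drains; a minimal sketch of that sequence, with the {1..20} retry bound and module names taken from the trace (the sleep between attempts is an assumption, not shown in the trace):

    # Flush outstanding I/O, then unload the kernel initiator modules; nvme-tcp can
    # refuse to unload while still in use, so retry under set +e until it succeeds.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumed pacing between retries
    done
    modprobe -v -r nvme-fabrics
    set -e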
00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 4107417 ']' 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 4107417 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 4107417 ']' 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 4107417 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4107417 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4107417' 00:10:04.691 killing process with pid 4107417 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 4107417 00:10:04.691 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 4107417 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # iptr 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # remove_spdk_ns 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.953 11:45:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:10:07.503 00:10:07.503 real 0m12.217s 00:10:07.503 user 0m12.459s 00:10:07.503 sys 0m6.335s 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.503 ************************************ 00:10:07.503 END TEST nvmf_bdevio 00:10:07.503 ************************************ 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:07.503 00:10:07.503 real 5m3.311s 00:10:07.503 user 11m46.877s 00:10:07.503 sys 1m50.462s 
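The iptr helper seen in the cleanup above undoes the firewall rule that ipts installed during setup by filtering the saved ruleset on its comment tag; the one-liner it reduces to, as traced:

    # Every rule ipts added carries an '-m comment --comment SPDK_NVMF:...' tag, so
    # restoring the ruleset minus those lines removes exactly the test's own rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore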
00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.503 ************************************ 00:10:07.503 END TEST nvmf_target_core 00:10:07.503 ************************************ 00:10:07.503 11:45:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.503 11:45:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.503 11:45:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.503 11:45:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.503 ************************************ 00:10:07.503 START TEST nvmf_target_extra 00:10:07.503 ************************************ 00:10:07.503 11:45:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.504 * Looking for test storage... 00:10:07.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.504 --rc genhtml_branch_coverage=1 00:10:07.504 --rc genhtml_function_coverage=1 00:10:07.504 --rc genhtml_legend=1 00:10:07.504 --rc geninfo_all_blocks=1 00:10:07.504 --rc geninfo_unexecuted_blocks=1 00:10:07.504 00:10:07.504 ' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.504 --rc genhtml_branch_coverage=1 00:10:07.504 --rc genhtml_function_coverage=1 00:10:07.504 --rc genhtml_legend=1 00:10:07.504 --rc geninfo_all_blocks=1 00:10:07.504 --rc geninfo_unexecuted_blocks=1 00:10:07.504 00:10:07.504 ' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.504 --rc genhtml_branch_coverage=1 00:10:07.504 --rc genhtml_function_coverage=1 00:10:07.504 --rc genhtml_legend=1 00:10:07.504 --rc geninfo_all_blocks=1 00:10:07.504 --rc geninfo_unexecuted_blocks=1 00:10:07.504 00:10:07.504 ' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.504 --rc genhtml_branch_coverage=1 00:10:07.504 --rc genhtml_function_coverage=1 00:10:07.504 --rc genhtml_legend=1 00:10:07.504 --rc geninfo_all_blocks=1 00:10:07.504 --rc geninfo_unexecuted_blocks=1 00:10:07.504 00:10:07.504 ' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # : 0 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:10:07.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@56 -- # have_pci_nics=0 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 ************************************ 00:10:07.504 START TEST nvmf_example 00:10:07.504 ************************************ 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:07.504 * Looking for test storage... 
00:10:07.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.504 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.767 --rc genhtml_branch_coverage=1 00:10:07.767 --rc genhtml_function_coverage=1 00:10:07.767 --rc genhtml_legend=1 00:10:07.767 --rc geninfo_all_blocks=1 00:10:07.767 --rc geninfo_unexecuted_blocks=1 00:10:07.767 00:10:07.767 ' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.767 --rc genhtml_branch_coverage=1 00:10:07.767 --rc genhtml_function_coverage=1 00:10:07.767 --rc genhtml_legend=1 00:10:07.767 --rc geninfo_all_blocks=1 00:10:07.767 --rc geninfo_unexecuted_blocks=1 00:10:07.767 00:10:07.767 ' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.767 --rc genhtml_branch_coverage=1 00:10:07.767 --rc genhtml_function_coverage=1 00:10:07.767 --rc genhtml_legend=1 00:10:07.767 --rc geninfo_all_blocks=1 00:10:07.767 --rc geninfo_unexecuted_blocks=1 00:10:07.767 00:10:07.767 ' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.767 --rc genhtml_branch_coverage=1 00:10:07.767 --rc genhtml_function_coverage=1 00:10:07.767 --rc genhtml_legend=1 00:10:07.767 --rc geninfo_all_blocks=1 00:10:07.767 --rc geninfo_unexecuted_blocks=1 00:10:07.767 00:10:07.767 ' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:07.767 11:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # : 0 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:10:07.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@56 -- # have_pci_nics=0 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:07.767 11:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:07.767 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # xtrace_disable 00:10:07.768 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_devs=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_devs 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_net_devs=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # pci_drivers=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # local -A pci_drivers 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # net_devs=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga net_devs 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # e810=() 00:10:15.911 11:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga e810 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # x722=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga x722 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # mlx=() 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # local -ga mlx 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:15.911 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:15.911 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:15.911 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:15.911 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.911 11:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:15.911 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:10:15.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:10:15.912 00:10:15.912 --- 10.0.0.2 ping statistics --- 00:10:15.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.912 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:10:15.912 00:10:15.912 --- 10.0.0.1 ping statistics --- 00:10:15.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.912 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4112435 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4112435 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 4112435 ']' 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.912 11:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.912 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:15.912 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:28.143 Initializing NVMe Controllers
00:10:28.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:28.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:28.143 Initialization complete. Launching workers.
00:10:28.143 ========================================================
00:10:28.143 Latency(us)
00:10:28.143 Device Information : IOPS MiB/s Average min max
00:10:28.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19149.55 74.80 3341.75 627.38 15495.87
00:10:28.143 ========================================================
00:10:28.143 Total : 19149.55 74.80 3341.75 627.38 15495.87
00:10:28.143 
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # sync
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # set +e
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # for i in {1..20}
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:10:28.143 rmmod nvme_tcp
00:10:28.143 rmmod nvme_fabrics
00:10:28.143 rmmod nvme_keyring
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # set -e
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@130 -- # return 0
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 4112435 ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 4112435
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 4112435 ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 4112435
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4112435
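Decoding the spdk_nvme_perf invocation above: 64 outstanding I/Os per queue (-q), 4 KiB I/Os (-o), a random mixed workload (-w randrw) with a 30% read mix (-M), a 10-second run (-t), and the transport ID of the listener created earlier (-r). The summary reports roughly 19.1k IOPS and 74.8 MiB/s at a 3.34 ms average latency. For reference, the benchmark command as traced:

# The spdk_nvme_perf run from the trace above, against the TCP listener.
./spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'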
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4112435'
00:10:28.143 killing process with pid 4112435
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 4112435
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 4112435
00:10:28.143 nvmf threads initialize successfully
00:10:28.143 bdev subsystem init successfully
00:10:28.143 created a nvmf target service
00:10:28.143 create targets's poll groups done
00:10:28.143 all subsystems of target started
00:10:28.143 nvmf target is running
00:10:28.143 all subsystems of target stopped
00:10:28.143 destroy targets's poll groups done
00:10:28.143 destroyed the nvmf target service
00:10:28.143 bdev subsystem finish successfully
00:10:28.143 nvmf threads destroy successfully
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # iptr
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # remove_spdk_ns
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:28.143 11:45:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.715 
00:10:28.715 real 0m21.230s
00:10:28.715 user 0m46.686s
00:10:28.715 sys 0m6.864s
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.715 ************************************
00:10:28.715 END TEST nvmf_example
00:10:28.715 ************************************
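The next suite is launched immediately below through the harness's run_test helper, which prints the START/END banners and per-test timing seen here around each test script. A simplified, illustrative-only sketch of that pattern (the real helper in autotest_common.sh additionally manages xtrace state and timing records):

# Illustrative sketch of the run_test banner/timing pattern, not the exact helper.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # run the test script with its arguments
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}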
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:28.715 ************************************
00:10:28.715 START TEST nvmf_filesystem
00:10:28.715 ************************************
00:10:28.715 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:28.979 * Looking for test storage...
00:10:28.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.979 --rc genhtml_branch_coverage=1
00:10:28.979 --rc genhtml_function_coverage=1
00:10:28.979 --rc genhtml_legend=1
00:10:28.979 --rc geninfo_all_blocks=1
00:10:28.979 --rc geninfo_unexecuted_blocks=1
00:10:28.979 
00:10:28.979 '
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.979 --rc genhtml_branch_coverage=1
00:10:28.979 --rc genhtml_function_coverage=1
00:10:28.979 --rc genhtml_legend=1
00:10:28.979 --rc geninfo_all_blocks=1
00:10:28.979 --rc geninfo_unexecuted_blocks=1
00:10:28.979 
00:10:28.979 '
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.979 --rc genhtml_branch_coverage=1
00:10:28.979 --rc genhtml_function_coverage=1
00:10:28.979 --rc genhtml_legend=1
00:10:28.979 --rc geninfo_all_blocks=1
00:10:28.979 --rc geninfo_unexecuted_blocks=1
00:10:28.979 
00:10:28.979 '
00:10:28.979 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:28.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:28.979 --rc genhtml_branch_coverage=1
00:10:28.979 --rc genhtml_function_coverage=1
00:10:28.979 --rc genhtml_legend=1
00:10:28.980 --rc geninfo_all_blocks=1
00:10:28.980 --rc geninfo_unexecuted_blocks=1
00:10:28.980 
00:10:28.980 '
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
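The lt/cmp_versions trace above is how the harness decides whether the installed lcov (1.15 here) is older than 2: each version is split on '.', '-' and ':' and compared component by component. A reconstructed sketch of that logic, covering only the '<' case exercised in this log:

# Reconstructed from the cmp_versions trace above; handles only the '<' case.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side already greater
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: success
    done
    return 1   # all components equal: not strictly less
}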
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:10:28.980 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:10:28.981 #define SPDK_CONFIG_H
00:10:28.981 #define SPDK_CONFIG_AIO_FSDEV 1
00:10:28.981 #define SPDK_CONFIG_APPS 1
00:10:28.981 #define SPDK_CONFIG_ARCH native
00:10:28.981 #undef SPDK_CONFIG_ASAN
00:10:28.981 #undef SPDK_CONFIG_AVAHI
00:10:28.981 #undef SPDK_CONFIG_CET
00:10:28.981 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:10:28.981 #define SPDK_CONFIG_COVERAGE 1
00:10:28.981 #define SPDK_CONFIG_CROSS_PREFIX 
00:10:28.981 #undef SPDK_CONFIG_CRYPTO
00:10:28.981 #undef SPDK_CONFIG_CRYPTO_MLX5
00:10:28.981 #undef SPDK_CONFIG_CUSTOMOCF
00:10:28.981 #undef SPDK_CONFIG_DAOS
00:10:28.981 #define SPDK_CONFIG_DAOS_DIR 
00:10:28.981 #define SPDK_CONFIG_DEBUG 1
00:10:28.981 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:10:28.981 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:28.981 #define SPDK_CONFIG_DPDK_INC_DIR 
00:10:28.981 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:10:28.981 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:10:28.981 #undef SPDK_CONFIG_DPDK_UADK
00:10:28.981 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:28.981 #define SPDK_CONFIG_EXAMPLES 1
00:10:28.981 #undef SPDK_CONFIG_FC
00:10:28.981 #define SPDK_CONFIG_FC_PATH 
00:10:28.981 #define SPDK_CONFIG_FIO_PLUGIN 1
00:10:28.981 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:10:28.981 #define SPDK_CONFIG_FSDEV 1
00:10:28.981 #undef SPDK_CONFIG_FUSE
00:10:28.981 #undef SPDK_CONFIG_FUZZER
00:10:28.981 #define SPDK_CONFIG_FUZZER_LIB 
00:10:28.981 #undef SPDK_CONFIG_GOLANG
00:10:28.981 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:10:28.981 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:10:28.981 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:10:28.981 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:10:28.981 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:10:28.981 #undef SPDK_CONFIG_HAVE_LIBBSD
00:10:28.981 #undef SPDK_CONFIG_HAVE_LZ4
00:10:28.981 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:10:28.981 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:10:28.981 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:10:28.981 #define SPDK_CONFIG_IDXD 1
00:10:28.981 #define SPDK_CONFIG_IDXD_KERNEL 1
00:10:28.981 #undef SPDK_CONFIG_IPSEC_MB
00:10:28.981 #define SPDK_CONFIG_IPSEC_MB_DIR 
00:10:28.981 #define SPDK_CONFIG_ISAL 1
00:10:28.981 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:10:28.981 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:10:28.981 #define SPDK_CONFIG_LIBDIR 
00:10:28.981 #undef SPDK_CONFIG_LTO
00:10:28.981 #define SPDK_CONFIG_MAX_LCORES 128
00:10:28.981 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:10:28.981 #define SPDK_CONFIG_NVME_CUSE 1
00:10:28.981 #undef SPDK_CONFIG_OCF
00:10:28.981 #define SPDK_CONFIG_OCF_PATH 
00:10:28.981 #define SPDK_CONFIG_OPENSSL_PATH 
00:10:28.981 #undef SPDK_CONFIG_PGO_CAPTURE
00:10:28.981 #define SPDK_CONFIG_PGO_DIR 
00:10:28.981 #undef SPDK_CONFIG_PGO_USE
00:10:28.981 #define SPDK_CONFIG_PREFIX /usr/local
00:10:28.981 #undef SPDK_CONFIG_RAID5F
00:10:28.981 #undef SPDK_CONFIG_RBD
00:10:28.981 #define SPDK_CONFIG_RDMA 1
00:10:28.981 #define SPDK_CONFIG_RDMA_PROV verbs
00:10:28.981 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:10:28.981 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:10:28.981 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:10:28.981 #define SPDK_CONFIG_SHARED 1
00:10:28.981 #undef SPDK_CONFIG_SMA
00:10:28.981 #define SPDK_CONFIG_TESTS 1
00:10:28.981 #undef SPDK_CONFIG_TSAN
00:10:28.981 #define SPDK_CONFIG_UBLK 1
00:10:28.981 #define SPDK_CONFIG_UBSAN 1
00:10:28.981 #undef SPDK_CONFIG_UNIT_TESTS
00:10:28.981 #undef SPDK_CONFIG_URING
00:10:28.981 #define SPDK_CONFIG_URING_PATH 
00:10:28.981 #undef SPDK_CONFIG_URING_ZNS
00:10:28.981 #undef SPDK_CONFIG_USDT
00:10:28.981 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:10:28.981 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:10:28.981 #define SPDK_CONFIG_VFIO_USER 1
00:10:28.981 #define SPDK_CONFIG_VFIO_USER_DIR 
00:10:28.981 #define SPDK_CONFIG_VHOST 1
00:10:28.981 #define SPDK_CONFIG_VIRTIO 1
00:10:28.981 #undef SPDK_CONFIG_VTUNE
00:10:28.981 #define SPDK_CONFIG_VTUNE_DIR 
00:10:28.981 #define SPDK_CONFIG_WERROR 1
00:10:28.981 #define SPDK_CONFIG_WPDK_DIR 
00:10:28.981 #undef SPDK_CONFIG_XNVME
00:10:28.981 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
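The escaped glob traced in the config.h dump above is how applications.sh detects a debug build before honoring SPDK_AUTOTEST_DEBUG_APPS: it reads the generated header into the shell and pattern-matches it. A sketch of the same check, with the header path as a placeholder:

# Sketch: glob-match the generated config header to detect a debug build.
spdk_config_h=/path/to/spdk/include/spdk/config.h   # placeholder path
if [[ $(< "$spdk_config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: extra SPDK_AUTOTEST_DEBUG_APPS checks may apply"
fi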
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:10:28.981 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:10:28.982 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
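The long ': 0' / 'export SPDK_TEST_*' pairs above are the xtrace of a defaulting idiom: each flag keeps any value inherited from autorun-spdk.conf and otherwise falls back to a default, then is exported for child scripts. A sketch of the pattern behind those trace lines:

# Defaulting idiom behind the ": 0" trace entries above.
: "${SPDK_TEST_NVMF:=0}"   # keep the caller-provided value, else default to 0
export SPDK_TEST_NVMF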
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:28.983 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 4115232 ]] 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 4115232 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
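At this point the harness has finished exporting its environment (library paths, sanitizer options, QEMU binaries, hugepage settings) and calls set_test_storage to reserve 2 GiB of scratch space. Condensed, the storage probe that the next stretch of trace performs looks roughly like the sketch below. This is a reconstruction from the traced commands, not the verbatim autotest_common.sh helper; $testdir is the harness variable pointing at test/nvmf/target, and the 64 MiB padding matches the requested_size=2214592512 visible in the trace.

# Simplified sketch of the set_test_storage flow traced below (reconstructed,
# not the verbatim SPDK helper). It snapshots every mount via df -T, then picks
# the first candidate directory whose backing filesystem has enough free space.
probe_test_storage() {
    local requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))  # 2 GiB + the 64 MiB pad seen in the trace
    local storage_fallback
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                # e.g. /tmp/spdk.fVrgCD below
    local -a candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    local -A fss sizes avails uses
    local source fs size use avail mount target_dir target_space new_size
    # The sizes in the trace look byte-denominated, so the helper presumably
    # passes a 1-byte block size to df; plain `df -T` reports 1K blocks.
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs; sizes["$mount"]=$size
        avails["$mount"]=$avail; uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space == 0 || target_space < requested_size )) && continue
        # Refuse to fill a real root filesystem past 95% (the `> 95` arithmetic
        # test visible in the trace).
        if [[ ${fss[$mount]} != tmpfs && ${fss[$mount]} != ramfs && $mount == / ]]; then
            new_size=$(( uses[$mount] + requested_size ))
            (( new_size * 100 / sizes[$mount] > 95 )) && continue
        fi
        mkdir -p "$target_dir" && echo "$target_dir" && return 0
    done
    return 1
}

On this machine the probe resolves $testdir to the overlay root filesystem with ~122 GB available, so the first candidate is accepted and exported as SPDK_TEST_STORAGE, as the "Found test storage" line below confirms.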
00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fVrgCD 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fVrgCD/tests/target /tmp/spdk.fVrgCD 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:28.984 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.984 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122634031104 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356521472 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6722490368 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668229632 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.246 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677728256 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=532480 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:29.246 * Looking for test storage... 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122634031104 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8937082880 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:29.246 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.247 --rc genhtml_branch_coverage=1 00:10:29.247 --rc genhtml_function_coverage=1 00:10:29.247 --rc genhtml_legend=1 00:10:29.247 --rc geninfo_all_blocks=1 00:10:29.247 --rc geninfo_unexecuted_blocks=1 00:10:29.247 00:10:29.247 ' 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.247 --rc genhtml_branch_coverage=1 00:10:29.247 --rc genhtml_function_coverage=1 00:10:29.247 --rc genhtml_legend=1 00:10:29.247 --rc geninfo_all_blocks=1 00:10:29.247 --rc geninfo_unexecuted_blocks=1 00:10:29.247 00:10:29.247 ' 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.247 --rc genhtml_branch_coverage=1 00:10:29.247 --rc genhtml_function_coverage=1 00:10:29.247 --rc genhtml_legend=1 00:10:29.247 --rc geninfo_all_blocks=1 00:10:29.247 --rc geninfo_unexecuted_blocks=1 00:10:29.247 00:10:29.247 ' 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.247 --rc genhtml_branch_coverage=1 00:10:29.247 --rc genhtml_function_coverage=1 00:10:29.247 --rc genhtml_legend=1 00:10:29.247 --rc geninfo_all_blocks=1 00:10:29.247 --rc geninfo_unexecuted_blocks=1 00:10:29.247 00:10:29.247 ' 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.247 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.247 11:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # : 0 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:10:29.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@56 -- # have_pci_nics=0 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # xtrace_disable 00:10:29.247 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_devs=() 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_devs 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_net_devs=() 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:10:37.392 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # pci_drivers=() 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # local -A pci_drivers 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # net_devs=() 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga net_devs 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # e810=() 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga e810 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # x722=() 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga x722 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # mlx=() 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # local -ga mlx 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.393 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:10:37.394 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:37.395 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:37.395 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@373 -- 
# [[ ice == unbound ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:10:37.395 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:37.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:37.397 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.397 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:10:37.397 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.397 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.397 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.397 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.397 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:10:37.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:10:37.397 00:10:37.397 --- 10.0.0.2 ping statistics --- 00:10:37.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.398 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:37.398 00:10:37.398 --- 10.0.0.1 ping statistics --- 00:10:37.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.398 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.398 ************************************ 00:10:37.398 START TEST nvmf_filesystem_no_in_capsule 00:10:37.398 ************************************ 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=4119023 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@506 -- # waitforlisten 4119023 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4119023 ']' 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.398 11:45:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.398 [2024-12-09 11:45:44.281245] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:10:37.398 [2024-12-09 11:45:44.281308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.398 [2024-12-09 11:45:44.383282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.398 [2024-12-09 11:45:44.436311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.398 [2024-12-09 11:45:44.436369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.398 [2024-12-09 11:45:44.436378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.398 [2024-12-09 11:45:44.436385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.398 [2024-12-09 11:45:44.436391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
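Stripped of the xtrace noise, the network plumbing that nvmf_tcp_init performed above, followed by the target launch that waitforlisten is now polling for, replays as the sequence below. The interface names (cvl_0_0/cvl_0_1) and addresses are exactly what the harness chose on this machine; the waitforlisten internals are only summarized in a comment.

# The first E810 port becomes the target side inside a fresh network
# namespace; the second port stays in the root namespace as the initiator.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Firewall rule from the trace: open the NVMe/TCP port on the initiator-side
# interface (tagged with an SPDK_NVMF comment so teardown can find it).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                 # root ns -> target: 0.613 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator: 0.315 ms

# nvmfappstart then runs the target inside the namespace; waitforlisten blocks
# until the app answers on the /var/tmp/spdk.sock RPC socket (path-based unix
# sockets are visible across network namespaces, so RPC calls need no netns exec).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &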
00:10:37.398 [2024-12-09 11:45:44.438432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.398 [2024-12-09 11:45:44.438561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.398 [2024-12-09 11:45:44.438711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.398 [2024-12-09 11:45:44.438711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.398 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 [2024-12-09 11:45:45.132894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 Malloc1 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.399 11:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.399 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 [2024-12-09 11:45:45.271680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:37.666 { 00:10:37.666 "name": "Malloc1", 00:10:37.666 "aliases": [ 00:10:37.666 "644117d0-5e49-45a0-9cc7-0a0d2f2ca3e9" 00:10:37.666 ], 00:10:37.666 "product_name": "Malloc disk", 00:10:37.666 "block_size": 512, 00:10:37.666 "num_blocks": 1048576, 00:10:37.666 "uuid": "644117d0-5e49-45a0-9cc7-0a0d2f2ca3e9", 00:10:37.666 "assigned_rate_limits": { 00:10:37.666 "rw_ios_per_sec": 0, 00:10:37.666 "rw_mbytes_per_sec": 0, 00:10:37.666 "r_mbytes_per_sec": 0, 00:10:37.666 "w_mbytes_per_sec": 0 00:10:37.666 }, 00:10:37.666 "claimed": true, 00:10:37.666 "claim_type": "exclusive_write", 00:10:37.666 "zoned": false, 00:10:37.666 "supported_io_types": { 00:10:37.666 "read": 
true, 00:10:37.666 "write": true, 00:10:37.666 "unmap": true, 00:10:37.666 "flush": true, 00:10:37.666 "reset": true, 00:10:37.666 "nvme_admin": false, 00:10:37.666 "nvme_io": false, 00:10:37.666 "nvme_io_md": false, 00:10:37.666 "write_zeroes": true, 00:10:37.666 "zcopy": true, 00:10:37.666 "get_zone_info": false, 00:10:37.666 "zone_management": false, 00:10:37.666 "zone_append": false, 00:10:37.666 "compare": false, 00:10:37.666 "compare_and_write": false, 00:10:37.666 "abort": true, 00:10:37.666 "seek_hole": false, 00:10:37.666 "seek_data": false, 00:10:37.666 "copy": true, 00:10:37.666 "nvme_iov_md": false 00:10:37.666 }, 00:10:37.666 "memory_domains": [ 00:10:37.666 { 00:10:37.666 "dma_device_id": "system", 00:10:37.666 "dma_device_type": 1 00:10:37.666 }, 00:10:37.666 { 00:10:37.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.666 "dma_device_type": 2 00:10:37.666 } 00:10:37.666 ], 00:10:37.666 "driver_specific": {} 00:10:37.666 } 00:10:37.666 ]' 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:37.666 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:37.667 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:37.667 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:37.667 11:45:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.579 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.579 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.579 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.579 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:39.579 11:45:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:41.496 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:41.496 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.757 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.757 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.142 ************************************ 00:10:43.142 START TEST filesystem_ext4 00:10:43.142 ************************************ 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
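By this point the trace has connected the host to the TCP target, waited for the namespace with the SPDKISFASTANDAWESOME serial to appear, and carved a single GPT partition across the 512 MiB malloc namespace; the filesystem_ext4 subtest that starts here formats and exercises that partition. Condensed into a standalone sketch — the NQN, address, serial, and commands are taken from this run, while the loop bound is simplified and the run's --hostnqn/--hostid flags are omitted for brevity:

  # Connect to the NVMe/TCP target and wait for the namespace to show up.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

  # One GPT partition spanning the whole namespace, then the filesystem under test.
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F "/dev/${nvme_name}p1"   # make_filesystem picks -F for ext4, -f for btrfs/xfs

The -F/-f choice mirrors the '[' ext4 = ext4 ']' branch visible in the trace: mkfs.ext4 takes an uppercase force flag, while mkfs.btrfs and mkfs.xfs take -f.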
00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.142 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:43.143 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:43.143 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:43.143 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:43.143 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.143 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.143 Discarding device blocks: 0/522240 done 00:10:43.143 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.143 Filesystem UUID: a2164fea-7bcd-4656-a408-ae93308cf510 00:10:43.143 Superblock backups stored on blocks: 00:10:43.143 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.143 00:10:43.143 Allocating group tables: 0/64 done 00:10:43.143 Writing inode tables: 0/64 done 00:10:43.143 Creating journal (8192 blocks): done 00:10:45.047 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.047 00:10:45.047 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:45.047 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.329 
11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4119023 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.329 00:10:50.329 real 0m7.501s 00:10:50.329 user 0m0.034s 00:10:50.329 sys 0m0.073s 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:50.329 ************************************ 00:10:50.329 END TEST filesystem_ext4 00:10:50.329 ************************************ 00:10:50.329 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:50.330 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.330 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.330 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.590 ************************************ 00:10:50.590 START TEST filesystem_btrfs 00:10:50.590 ************************************ 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:50.590 11:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:50.590 btrfs-progs v6.8.1 00:10:50.590 See https://btrfs.readthedocs.io for more information. 00:10:50.590 00:10:50.590 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:50.590 NOTE: several default settings have changed in version 5.15, please make sure 00:10:50.590 this does not affect your deployments: 00:10:50.590 - DUP for metadata (-m dup) 00:10:50.590 - enabled no-holes (-O no-holes) 00:10:50.590 - enabled free-space-tree (-R free-space-tree) 00:10:50.590 00:10:50.590 Label: (null) 00:10:50.590 UUID: 175a04d8-d112-4b07-9b2f-0d6935168af5 00:10:50.590 Node size: 16384 00:10:50.590 Sector size: 4096 (CPU page size: 4096) 00:10:50.590 Filesystem size: 510.00MiB 00:10:50.590 Block group profiles: 00:10:50.590 Data: single 8.00MiB 00:10:50.590 Metadata: DUP 32.00MiB 00:10:50.590 System: DUP 8.00MiB 00:10:50.590 SSD detected: yes 00:10:50.590 Zoned device: no 00:10:50.590 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:50.590 Checksum: crc32c 00:10:50.590 Number of devices: 1 00:10:50.590 Devices: 00:10:50.590 ID SIZE PATH 00:10:50.590 1 510.00MiB /dev/nvme0n1p1 00:10:50.590 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:50.590 11:45:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4119023 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.532 
11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.532 00:10:51.532 real 0m1.032s 00:10:51.532 user 0m0.028s 00:10:51.532 sys 0m0.120s 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:51.532 ************************************ 00:10:51.532 END TEST filesystem_btrfs 00:10:51.532 ************************************ 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.532 ************************************ 00:10:51.532 START TEST filesystem_xfs 00:10:51.532 ************************************ 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:51.532 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:51.532 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:51.532 = sectsz=512 attr=2, projid32bit=1 00:10:51.532 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:51.532 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:51.532 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:51.532 = sunit=0 swidth=0 blks 00:10:51.532 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:51.532 log =internal log bsize=4096 blocks=16384, version=2 00:10:51.532 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:51.532 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:52.917 Discarding blocks...Done. 00:10:52.917 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:52.917 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4119023 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:54.828 00:10:54.828 real 0m2.947s 00:10:54.828 user 0m0.024s 00:10:54.828 sys 0m0.083s 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:54.828 ************************************ 00:10:54.828 END TEST filesystem_xfs 00:10:54.828 ************************************ 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:54.828 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.088 11:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4119023 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4119023 ']' 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4119023 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119023 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119023' 00:10:55.088 killing process with pid 4119023 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 4119023 00:10:55.088 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 4119023 00:10:55.349 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:55.349 00:10:55.349 real 0m18.867s 00:10:55.349 user 1m14.494s 00:10:55.349 sys 0m1.463s 00:10:55.349 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.350 ************************************ 00:10:55.350 END TEST nvmf_filesystem_no_in_capsule 00:10:55.350 ************************************ 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:55.350 ************************************ 00:10:55.350 START TEST nvmf_filesystem_in_capsule 00:10:55.350 ************************************ 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=4122929 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 4122929 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 4122929 ']' 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
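The in-capsule phase that begins here repeats the same filesystem matrix; its defining parameter is -c 4096 on nvmf_create_transport, which lets up to 4 KiB of write data travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. The bring-up traced over the next few records reduces to roughly the rpc_cmd sequence below (rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py; the invocations are copied from the trace, while the backgrounding and pid handling are simplified stand-ins for nvmfappstart/waitforlisten):

  # Start the target (this run pins it in the cvl_0_0_ns_spdk netns, cores 0-3).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420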
00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.350 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.350 [2024-12-09 11:46:03.227973] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:10:55.350 [2024-12-09 11:46:03.228029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.610 [2024-12-09 11:46:03.318595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.610 [2024-12-09 11:46:03.349622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.610 [2024-12-09 11:46:03.349659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.610 [2024-12-09 11:46:03.349665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.610 [2024-12-09 11:46:03.349670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.610 [2024-12-09 11:46:03.349674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.610 [2024-12-09 11:46:03.351160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.610 [2024-12-09 11:46:03.351277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.610 [2024-12-09 11:46:03.351430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.610 [2024-12-09 11:46:03.351432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.180 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.180 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:56.180 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:56.180 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.180 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 [2024-12-09 11:46:04.077588] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.440 11:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 Malloc1 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.440 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.441 [2024-12-09 11:46:04.216496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:56.441 11:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:56.441 { 00:10:56.441 "name": "Malloc1", 00:10:56.441 "aliases": [ 00:10:56.441 "cfc5e5cc-b10b-44c3-a5d5-b12acbf7d460" 00:10:56.441 ], 00:10:56.441 "product_name": "Malloc disk", 00:10:56.441 "block_size": 512, 00:10:56.441 "num_blocks": 1048576, 00:10:56.441 "uuid": "cfc5e5cc-b10b-44c3-a5d5-b12acbf7d460", 00:10:56.441 "assigned_rate_limits": { 00:10:56.441 "rw_ios_per_sec": 0, 00:10:56.441 "rw_mbytes_per_sec": 0, 00:10:56.441 "r_mbytes_per_sec": 0, 00:10:56.441 "w_mbytes_per_sec": 0 00:10:56.441 }, 00:10:56.441 "claimed": true, 00:10:56.441 "claim_type": "exclusive_write", 00:10:56.441 "zoned": false, 00:10:56.441 "supported_io_types": { 00:10:56.441 "read": true, 00:10:56.441 "write": true, 00:10:56.441 "unmap": true, 00:10:56.441 "flush": true, 00:10:56.441 "reset": true, 00:10:56.441 "nvme_admin": false, 00:10:56.441 "nvme_io": false, 00:10:56.441 "nvme_io_md": false, 00:10:56.441 "write_zeroes": true, 00:10:56.441 "zcopy": true, 00:10:56.441 "get_zone_info": false, 00:10:56.441 "zone_management": false, 00:10:56.441 "zone_append": false, 00:10:56.441 "compare": false, 00:10:56.441 "compare_and_write": false, 00:10:56.441 "abort": true, 00:10:56.441 "seek_hole": false, 00:10:56.441 "seek_data": false, 00:10:56.441 "copy": true, 00:10:56.441 "nvme_iov_md": false 00:10:56.441 }, 00:10:56.441 "memory_domains": [ 00:10:56.441 { 00:10:56.441 "dma_device_id": "system", 00:10:56.441 "dma_device_type": 1 00:10:56.441 }, 00:10:56.441 { 00:10:56.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.441 "dma_device_type": 2 00:10:56.441 } 00:10:56.441 ], 00:10:56.441 "driver_specific": {} 00:10:56.441 } 00:10:56.441 ]' 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:56.441 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:56.701 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:56.701 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:56.701 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:56.701 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:56.701 11:46:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.083 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.083 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:58.083 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.083 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:58.083 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:00.624 11:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:00.624 11:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:00.624 11:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.006 ************************************ 00:11:02.006 START TEST filesystem_in_capsule_ext4 00:11:02.006 ************************************ 00:11:02.006 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:02.007 11:46:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.007 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.007 Discarding device blocks: 0/522240 done 00:11:02.007 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.007 Filesystem UUID: 1ae1d403-bd51-4f78-a7df-a2bc969c1deb 00:11:02.007 Superblock backups stored on blocks: 00:11:02.007 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.007 00:11:02.007 Allocating group tables: 0/64 done 00:11:02.007 Writing inode tables: 
0/64 done 00:11:02.267 Creating journal (8192 blocks): done 00:11:04.484 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:04.484 00:11:04.484 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:04.484 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4122929 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.069 00:11:11.069 real 0m9.042s 00:11:11.069 user 0m0.029s 00:11:11.069 sys 0m0.080s 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:11.069 ************************************ 00:11:11.069 END TEST filesystem_in_capsule_ext4 00:11:11.069 ************************************ 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.069 
************************************ 00:11:11.069 START TEST filesystem_in_capsule_btrfs 00:11:11.069 ************************************ 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.069 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:11.069 btrfs-progs v6.8.1 00:11:11.069 See https://btrfs.readthedocs.io for more information. 00:11:11.069 00:11:11.069 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:11.069 NOTE: several default settings have changed in version 5.15, please make sure 00:11:11.069 this does not affect your deployments: 00:11:11.069 - DUP for metadata (-m dup) 00:11:11.069 - enabled no-holes (-O no-holes) 00:11:11.069 - enabled free-space-tree (-R free-space-tree) 00:11:11.069 00:11:11.069 Label: (null) 00:11:11.069 UUID: e1a592b7-074f-452e-b5a8-3a7d7a7a0aaf 00:11:11.069 Node size: 16384 00:11:11.069 Sector size: 4096 (CPU page size: 4096) 00:11:11.069 Filesystem size: 510.00MiB 00:11:11.070 Block group profiles: 00:11:11.070 Data: single 8.00MiB 00:11:11.070 Metadata: DUP 32.00MiB 00:11:11.070 System: DUP 8.00MiB 00:11:11.070 SSD detected: yes 00:11:11.070 Zoned device: no 00:11:11.070 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:11.070 Checksum: crc32c 00:11:11.070 Number of devices: 1 00:11:11.070 Devices: 00:11:11.070 ID SIZE PATH 00:11:11.070 1 510.00MiB /dev/nvme0n1p1 00:11:11.070 00:11:11.070 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:11.070 11:46:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.640 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.640 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4122929 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.641 00:11:11.641 real 0m0.681s 00:11:11.641 user 0m0.031s 00:11:11.641 sys 0m0.116s 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:11.641 ************************************ 00:11:11.641 END TEST filesystem_in_capsule_btrfs 00:11:11.641 ************************************ 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 ************************************ 00:11:11.641 START TEST filesystem_in_capsule_xfs 00:11:11.641 ************************************ 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.641 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:11.641 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:11.641 = sectsz=512 attr=2, projid32bit=1 00:11:11.641 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:11.641 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:11.641 data = bsize=4096 blocks=130560, imaxpct=25 00:11:11.641 = sunit=0 swidth=0 blks 00:11:11.641 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:11.641 log =internal log bsize=4096 blocks=16384, version=2 00:11:11.641 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:11.641 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:12.583 Discarding blocks...Done. 
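The mkfs output above and the mount/touch/sync/rm/umount lines that follow are the whole per-filesystem check driven by target/filesystem.sh. A minimal standalone sketch of that check, with the device, mount point, and target PID taken from this run (they are values from this log, not fixed names):

    #!/usr/bin/env bash
    # Sketch of the filesystem smoke test as run above; assumes the target's
    # null bdev is already connected and visible as /dev/nvme0n1p1 and that
    # /mnt/device exists. Run as root.
    set -e
    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    nvmfpid=4122929           # PID of the nvmf_tgt process in this run

    mkfs.xfs -f "$dev"        # the btrfs pass above uses mkfs.btrfs instead
    mount "$dev" "$mnt"
    touch "$mnt/aaa"          # prove the filesystem accepts writes over NVMe/TCP
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"

    kill -0 "$nvmfpid"                        # target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present

The xtrace lines below exercise exactly this sequence against the freshly made xfs filesystem; the real/user/sys lines are bash's built-in time output for the pass.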
00:11:12.583 11:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:12.583 11:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.297 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.297 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:15.297 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4122929 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.298 00:11:15.298 real 0m3.376s 00:11:15.298 user 0m0.026s 00:11:15.298 sys 0m0.080s 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.298 ************************************ 00:11:15.298 END TEST filesystem_in_capsule_xfs 00:11:15.298 ************************************ 00:11:15.298 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:15.298 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:15.298 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4122929 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 4122929 ']' 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 4122929 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4122929 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4122929' 00:11:15.575 killing process with pid 4122929 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 4122929 00:11:15.575 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 4122929 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.863 00:11:15.863 real 0m20.319s 00:11:15.863 user 1m20.389s 00:11:15.863 sys 0m1.477s 00:11:15.863 11:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.863 ************************************ 00:11:15.863 END TEST nvmf_filesystem_in_capsule 00:11:15.863 ************************************ 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # sync 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # set +e 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # for i in {1..20} 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:11:15.863 rmmod nvme_tcp 00:11:15.863 rmmod nvme_fabrics 00:11:15.863 rmmod nvme_keyring 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # set -e 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@130 -- # return 0 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # iptr 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.863 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # remove_spdk_ns 00:11:15.864 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.864 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.864 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.801 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:11:17.801 00:11:17.801 real 0m49.167s 00:11:17.801 user 2m37.055s 00:11:17.801 sys 0m8.707s 00:11:17.801 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.801 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.801 
************************************ 00:11:17.801 END TEST nvmf_filesystem 00:11:17.801 ************************************ 00:11:18.062 11:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:18.062 11:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.062 11:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.062 11:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.062 ************************************ 00:11:18.062 START TEST nvmf_target_discovery 00:11:18.062 ************************************ 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:18.063 * Looking for test storage... 00:11:18.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.063 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.324 --rc genhtml_branch_coverage=1 00:11:18.324 --rc genhtml_function_coverage=1 00:11:18.324 --rc genhtml_legend=1 00:11:18.324 --rc geninfo_all_blocks=1 00:11:18.324 --rc geninfo_unexecuted_blocks=1 00:11:18.324 00:11:18.324 ' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.324 --rc genhtml_branch_coverage=1 00:11:18.324 --rc genhtml_function_coverage=1 00:11:18.324 --rc genhtml_legend=1 00:11:18.324 --rc geninfo_all_blocks=1 00:11:18.324 --rc geninfo_unexecuted_blocks=1 00:11:18.324 00:11:18.324 ' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.324 --rc genhtml_branch_coverage=1 00:11:18.324 --rc genhtml_function_coverage=1 00:11:18.324 --rc genhtml_legend=1 00:11:18.324 --rc geninfo_all_blocks=1 00:11:18.324 --rc geninfo_unexecuted_blocks=1 00:11:18.324 00:11:18.324 ' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.324 --rc genhtml_branch_coverage=1 00:11:18.324 --rc genhtml_function_coverage=1 00:11:18.324 --rc genhtml_legend=1 00:11:18.324 --rc geninfo_all_blocks=1 00:11:18.324 --rc geninfo_unexecuted_blocks=1 00:11:18.324 00:11:18.324 ' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.324 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # : 0 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery
-- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:11:18.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@56 -- # have_pci_nics=0 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # xtrace_disable 00:11:18.325 11:46:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_devs=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_devs 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_net_devs=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # pci_drivers=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # local -A pci_drivers 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # net_devs=() 00:11:26.468 11:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga net_devs 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # e810=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga e810 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # x722=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga x722 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # mlx=() 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # local -ga mlx 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.468 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.469 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.469 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.469 11:46:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.469 11:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:11:26.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:11:26.469 00:11:26.469 --- 10.0.0.2 ping statistics --- 00:11:26.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.469 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:11:26.469 00:11:26.469 --- 10.0.0.1 ping statistics --- 00:11:26.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.469 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=4131223 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 4131223 00:11:26.469 11:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 4131223 ']' 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.469 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 [2024-12-09 11:46:33.286813] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:11:26.469 [2024-12-09 11:46:33.286882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.469 [2024-12-09 11:46:33.388305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.469 [2024-12-09 11:46:33.440567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.469 [2024-12-09 11:46:33.440625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.469 [2024-12-09 11:46:33.440634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.469 [2024-12-09 11:46:33.440649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.469 [2024-12-09 11:46:33.440656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
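At this point nvmf_tgt is up inside the cvl_0_0_ns_spdk namespace and the script starts configuring it through its RPC socket; rpc_cmd is effectively a wrapper around SPDK's scripts/rpc.py. The fixture built by the rpc_cmd calls below can be reproduced by hand roughly as follows (a sketch: the rpc.py path assumes you are in an SPDK checkout, while the transport options, NQNs, serials, bdev sizes, and addresses are the ones used in this run):

    # Build the discovery-test fixture: one TCP transport, four null-bdev
    # subsystems listening on 10.0.0.2:4420, plus a discovery referral.
    rpc=./scripts/rpc.py      # assumed path; rpc_cmd in the log wraps this

    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # From the initiator side, the six-record discovery log shown further
    # down (1 discovery subsystem + 4 NVMe subsystems + 1 referral) is then
    # fetched with:
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be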
00:11:26.469 [2024-12-09 11:46:33.442697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.469 [2024-12-09 11:46:33.442849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.469 [2024-12-09 11:46:33.443017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.469 [2024-12-09 11:46:33.443019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.469 [2024-12-09 11:46:34.153120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:26.469 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 Null1 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 [2024-12-09 11:46:34.225821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 Null2 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:26.470 Null3 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.470 Null4 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.470 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.730 11:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.730 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:26.730 00:11:26.730 Discovery Log Number of Records 6, Generation counter 6 00:11:26.731 =====Discovery Log Entry 0====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: current discovery subsystem 00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4420 00:11:26.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: explicit discovery connections, duplicate discovery information 00:11:26.731 sectype: none 00:11:26.731 =====Discovery Log Entry 1====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: nvme subsystem 00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4420 00:11:26.731 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: none 00:11:26.731 sectype: none 00:11:26.731 =====Discovery Log Entry 2====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: nvme subsystem 00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4420 00:11:26.731 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: none 00:11:26.731 sectype: none 00:11:26.731 =====Discovery Log Entry 3====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: nvme subsystem 00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4420 00:11:26.731 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: none 00:11:26.731 sectype: none 00:11:26.731 =====Discovery Log Entry 4====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: nvme subsystem 
00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4420 00:11:26.731 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: none 00:11:26.731 sectype: none 00:11:26.731 =====Discovery Log Entry 5====== 00:11:26.731 trtype: tcp 00:11:26.731 adrfam: ipv4 00:11:26.731 subtype: discovery subsystem referral 00:11:26.731 treq: not required 00:11:26.731 portid: 0 00:11:26.731 trsvcid: 4430 00:11:26.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.731 traddr: 10.0.0.2 00:11:26.731 eflags: none 00:11:26.731 sectype: none 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:26.731 Perform nvmf subsystem discovery via RPC 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.731 [ 00:11:26.731 { 00:11:26.731 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.731 "subtype": "Discovery", 00:11:26.731 "listen_addresses": [ 00:11:26.731 { 00:11:26.731 "trtype": "TCP", 00:11:26.731 "adrfam": "IPv4", 00:11:26.731 "traddr": "10.0.0.2", 00:11:26.731 "trsvcid": "4420" 00:11:26.731 } 00:11:26.731 ], 00:11:26.731 "allow_any_host": true, 00:11:26.731 "hosts": [] 00:11:26.731 }, 00:11:26.731 { 00:11:26.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.731 "subtype": "NVMe", 00:11:26.731 "listen_addresses": [ 00:11:26.731 { 00:11:26.731 "trtype": "TCP", 00:11:26.731 "adrfam": "IPv4", 00:11:26.731 "traddr": "10.0.0.2", 00:11:26.731 "trsvcid": "4420" 00:11:26.731 } 00:11:26.731 ], 00:11:26.731 "allow_any_host": true, 00:11:26.731 "hosts": [], 00:11:26.731 "serial_number": "SPDK00000000000001", 00:11:26.731 "model_number": "SPDK bdev Controller", 00:11:26.731 "max_namespaces": 32, 00:11:26.731 "min_cntlid": 1, 00:11:26.731 "max_cntlid": 65519, 00:11:26.731 "namespaces": [ 00:11:26.731 { 00:11:26.731 "nsid": 1, 00:11:26.731 "bdev_name": "Null1", 00:11:26.731 "name": "Null1", 00:11:26.731 "nguid": "6C42BA8A73484F26B4DC2C4A2CA65842", 00:11:26.731 "uuid": "6c42ba8a-7348-4f26-b4dc-2c4a2ca65842" 00:11:26.731 } 00:11:26.731 ] 00:11:26.731 }, 00:11:26.731 { 00:11:26.731 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:26.731 "subtype": "NVMe", 00:11:26.731 "listen_addresses": [ 00:11:26.731 { 00:11:26.731 "trtype": "TCP", 00:11:26.731 "adrfam": "IPv4", 00:11:26.731 "traddr": "10.0.0.2", 00:11:26.731 "trsvcid": "4420" 00:11:26.731 } 00:11:26.731 ], 00:11:26.731 "allow_any_host": true, 00:11:26.731 "hosts": [], 00:11:26.731 "serial_number": "SPDK00000000000002", 00:11:26.731 "model_number": "SPDK bdev Controller", 00:11:26.731 "max_namespaces": 32, 00:11:26.731 "min_cntlid": 1, 00:11:26.731 "max_cntlid": 65519, 00:11:26.731 "namespaces": [ 00:11:26.731 { 00:11:26.731 "nsid": 1, 00:11:26.731 "bdev_name": "Null2", 00:11:26.731 "name": "Null2", 00:11:26.731 "nguid": "A627C3E6A7B14F7ABBBC121EBFCB24D1", 00:11:26.731 "uuid": "a627c3e6-a7b1-4f7a-bbbc-121ebfcb24d1" 00:11:26.731 } 00:11:26.731 ] 00:11:26.731 }, 00:11:26.731 { 00:11:26.731 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:26.731 "subtype": "NVMe", 00:11:26.731 "listen_addresses": [ 00:11:26.731 { 00:11:26.731 "trtype": "TCP", 00:11:26.731 "adrfam": "IPv4", 00:11:26.731 "traddr": "10.0.0.2", 
00:11:26.731 "trsvcid": "4420" 00:11:26.731 } 00:11:26.731 ], 00:11:26.731 "allow_any_host": true, 00:11:26.731 "hosts": [], 00:11:26.731 "serial_number": "SPDK00000000000003", 00:11:26.731 "model_number": "SPDK bdev Controller", 00:11:26.731 "max_namespaces": 32, 00:11:26.731 "min_cntlid": 1, 00:11:26.731 "max_cntlid": 65519, 00:11:26.731 "namespaces": [ 00:11:26.731 { 00:11:26.731 "nsid": 1, 00:11:26.731 "bdev_name": "Null3", 00:11:26.731 "name": "Null3", 00:11:26.731 "nguid": "992F8D642E4E4AAB9AC356BB8532CE13", 00:11:26.731 "uuid": "992f8d64-2e4e-4aab-9ac3-56bb8532ce13" 00:11:26.731 } 00:11:26.731 ] 00:11:26.731 }, 00:11:26.731 { 00:11:26.731 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:26.731 "subtype": "NVMe", 00:11:26.731 "listen_addresses": [ 00:11:26.731 { 00:11:26.731 "trtype": "TCP", 00:11:26.731 "adrfam": "IPv4", 00:11:26.731 "traddr": "10.0.0.2", 00:11:26.731 "trsvcid": "4420" 00:11:26.731 } 00:11:26.731 ], 00:11:26.731 "allow_any_host": true, 00:11:26.731 "hosts": [], 00:11:26.731 "serial_number": "SPDK00000000000004", 00:11:26.731 "model_number": "SPDK bdev Controller", 00:11:26.731 "max_namespaces": 32, 00:11:26.731 "min_cntlid": 1, 00:11:26.731 "max_cntlid": 65519, 00:11:26.731 "namespaces": [ 00:11:26.731 { 00:11:26.731 "nsid": 1, 00:11:26.731 "bdev_name": "Null4", 00:11:26.731 "name": "Null4", 00:11:26.731 "nguid": "A06B4D2841324C43A937525AF7BE73B7", 00:11:26.731 "uuid": "a06b4d28-4132-4c43-a937-525af7be73b7" 00:11:26.731 } 00:11:26.731 ] 00:11:26.731 } 00:11:26.731 ] 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.731 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.993 11:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # sync 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # set +e 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # for i in {1..20} 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:11:26.993 rmmod nvme_tcp 00:11:26.993 rmmod nvme_fabrics 00:11:26.993 rmmod nvme_keyring 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # set -e 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@130 -- # return 0 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 4131223 ']' 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 4131223 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 4131223 ']' 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 4131223 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.993 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131223 00:11:27.253 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.253 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.253 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131223' 00:11:27.253 killing process with pid 4131223 00:11:27.253 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 4131223 00:11:27.253 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 4131223 00:11:27.254 11:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # iptr 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # remove_spdk_ns 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.254 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:11:29.799 00:11:29.799 real 0m11.328s 00:11:29.799 user 0m8.413s 00:11:29.799 sys 0m6.025s 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.799 ************************************ 00:11:29.799 END TEST nvmf_target_discovery 00:11:29.799 ************************************ 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.799 ************************************ 00:11:29.799 START TEST nvmf_referrals 00:11:29.799 ************************************ 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.799 * Looking for test storage... 
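For reference, the nvmf_target_discovery run that just ended boils down to the RPC sequence below. This is a hedged sketch, assuming SPDK's in-tree scripts/rpc.py and the default RPC socket, not a verbatim replay of discovery.sh; the method names, arguments, and addresses are taken from the trace above.

# Setup: one null bdev + subsystem + namespace + TCP listener per index,
# then a discovery listener and a referral to a second discovery service.
rpc=./scripts/rpc.py
for i in 1 2 3 4; do
  $rpc bdev_null_create Null$i 102400 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# Teardown mirrors the setup, as seen at the end of the trace.
for i in 1 2 3 4; do
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  $rpc bdev_null_delete Null$i
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430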
00:11:29.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.799 --rc genhtml_branch_coverage=1 00:11:29.799 --rc genhtml_function_coverage=1 00:11:29.799 --rc genhtml_legend=1 00:11:29.799 --rc geninfo_all_blocks=1 00:11:29.799 --rc geninfo_unexecuted_blocks=1 00:11:29.799 00:11:29.799 ' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.799 --rc genhtml_branch_coverage=1 00:11:29.799 --rc genhtml_function_coverage=1 00:11:29.799 --rc genhtml_legend=1 00:11:29.799 --rc geninfo_all_blocks=1 00:11:29.799 --rc geninfo_unexecuted_blocks=1 00:11:29.799 00:11:29.799 ' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.799 --rc genhtml_branch_coverage=1 00:11:29.799 --rc genhtml_function_coverage=1 00:11:29.799 --rc genhtml_legend=1 00:11:29.799 --rc geninfo_all_blocks=1 00:11:29.799 --rc geninfo_unexecuted_blocks=1 00:11:29.799 00:11:29.799 ' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.799 --rc genhtml_branch_coverage=1 00:11:29.799 --rc genhtml_function_coverage=1 00:11:29.799 --rc genhtml_legend=1 00:11:29.799 --rc geninfo_all_blocks=1 00:11:29.799 --rc geninfo_unexecuted_blocks=1 00:11:29.799 00:11:29.799 ' 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.799 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # : 0 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:11:29.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@56 -- # have_pci_nics=0 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
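The "integer expression expected" error above comes from common.sh line 34 evaluating '[' '' -eq 1 ']': bash's test builtin rejects an empty string as an operand of -eq. A minimal sketch of the usual guard, with FLAG as a hypothetical stand-in for whatever variable arrived unset; this is not the actual common.sh code.

# Defaulting the expansion keeps the arithmetic comparison well-formed even
# when the flag was never exported by the job's autorun-spdk.conf.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "feature enabled"
fi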
00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # xtrace_disable 00:11:29.800 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_devs=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_devs 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_net_devs=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # pci_drivers=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # local -A pci_drivers 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # net_devs=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga net_devs 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # e810=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga e810 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # x722=() 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga x722 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # mlx=() 00:11:37.945 11:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # local -ga mlx 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:37.945 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:37.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:37.945 
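The "Found net devices under ..." lines that follow come from globbing sysfs; the same PCI-to-netdev mapping can be reproduced by hand. A sketch mirroring the pci_net_devs lookup in common.sh (the two PCI addresses are this rig's E810 ports, not general values):

# Resolve each NIC's PCI address to the kernel netdev name it backs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
  for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:4b:00.0 -> cvl_0_0
  done
done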
11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.945 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:37.946 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:37.946 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:37.946 11:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:11:37.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:11:37.946 00:11:37.946 --- 10.0.0.2 ping statistics --- 00:11:37.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.946 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:11:37.946 00:11:37.946 --- 10.0.0.1 ping statistics --- 00:11:37.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.946 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=4135681 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 4135681 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 4135681 ']' 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
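Condensed from the nvmf_tcp_init trace above: the harness splits the two E810 ports across network namespaces so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses a real link. The commands below are lifted from the trace; the cvl_* interface names are specific to this machine.

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The harness additionally tags this rule with -m comment SPDK_NVMF for cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT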
00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.946 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.946 [2024-12-09 11:46:44.957667] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:11:37.946 [2024-12-09 11:46:44.957736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.946 [2024-12-09 11:46:45.056369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.946 [2024-12-09 11:46:45.109135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.946 [2024-12-09 11:46:45.109193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.946 [2024-12-09 11:46:45.109202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.946 [2024-12-09 11:46:45.109209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.946 [2024-12-09 11:46:45.109215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.946 [2024-12-09 11:46:45.111617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.946 [2024-12-09 11:46:45.111766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.946 [2024-12-09 11:46:45.112017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.946 [2024-12-09 11:46:45.112017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.946 [2024-12-09 11:46:45.814589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.946 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.947 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:37.947 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.947 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
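The referral test's bring-up just traced reduces to three steps: launch nvmf_tgt inside the target namespace, create the TCP transport, and expose the discovery subsystem on port 8009 before adding referrals. A hedged sketch, assuming the in-tree binary and rpc.py paths and the default /var/tmp/spdk.sock:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Wait for the RPC socket before issuing commands (the harness uses waitforlisten).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ref in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ref -s 4430
done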
00:11:38.207 [2024-12-09 11:46:45.843784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.207 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:38.468 11:46:46 
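The nvme branch of get_referral_ips derives the same list from the host side. A sketch of that path, reusing the host identity printed in the trace (nvme-cli with JSON output is assumed):

HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort
# Dropping the "current discovery subsystem" record leaves only referral entries,
# so after the removals above this prints nothing, matching the '' == '' check below.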
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.468 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.728 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.729 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.989 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.251 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.251 11:46:47 
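Referrals can point at a concrete subsystem rather than another discovery service, which is what the -n checks above distinguish. A sketch of the two flavours just exercised (rpc.py path assumed as before):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# A referral to another discovery service:
"$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
# A referral to a specific subsystem:
"$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# In the host's discovery log the first appears with subtype
# "discovery subsystem referral" (subnqn nqn.2014-08.org.nvmexpress.discovery),
# the second with subtype "nvme subsystem" (subnqn nqn.2016-06.io.spdk:cnode1),
# which is exactly what the get_discovery_entries/jq pairs above assert.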
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.511 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:39.511 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.511 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:39.511 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.511 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.512 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.772 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.032 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # sync 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 
00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # set +e 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # for i in {1..20} 00:11:40.293 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:11:40.293 rmmod nvme_tcp 00:11:40.293 rmmod nvme_fabrics 00:11:40.293 rmmod nvme_keyring 00:11:40.293 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # set -e 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@130 -- # return 0 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 4135681 ']' 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 4135681 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 4135681 ']' 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 4135681 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4135681 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4135681' 00:11:40.294 killing process with pid 4135681 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 4135681 00:11:40.294 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 4135681 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # iptr 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # remove_spdk_ns 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.555 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.555 11:46:48 
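nvmftestfini's teardown, traced above, tolerates transient failures: module unload is retried with errexit relaxed, and the firewall is restored by dropping only the rules the test tagged. A condensed sketch (the sleep between retries is an addition here, not lifted from common.sh):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off; the modules can stay busy briefly after target exit
done
set -e
# iptr: reload the ruleset minus everything carrying the SPDK_NVMF comment
# that was attached when the rules were inserted.
iptables-save | grep -v SPDK_NVMF | iptables-restore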
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.470 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:11:42.470 00:11:42.470 real 0m13.154s 00:11:42.470 user 0m15.643s 00:11:42.470 sys 0m6.493s 00:11:42.470 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.470 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.470 ************************************ 00:11:42.470 END TEST nvmf_referrals 00:11:42.470 ************************************ 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.732 ************************************ 00:11:42.732 START TEST nvmf_connect_disconnect 00:11:42.732 ************************************ 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.732 * Looking for test storage... 00:11:42.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.732 11:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.732 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.733 --rc genhtml_branch_coverage=1 00:11:42.733 --rc genhtml_function_coverage=1 00:11:42.733 --rc genhtml_legend=1 00:11:42.733 --rc geninfo_all_blocks=1 00:11:42.733 --rc geninfo_unexecuted_blocks=1 00:11:42.733 00:11:42.733 ' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.733 --rc genhtml_branch_coverage=1 00:11:42.733 --rc genhtml_function_coverage=1 00:11:42.733 --rc genhtml_legend=1 00:11:42.733 --rc geninfo_all_blocks=1 00:11:42.733 --rc geninfo_unexecuted_blocks=1 00:11:42.733 00:11:42.733 ' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.733 --rc genhtml_branch_coverage=1 00:11:42.733 --rc genhtml_function_coverage=1 00:11:42.733 --rc genhtml_legend=1 00:11:42.733 --rc geninfo_all_blocks=1 00:11:42.733 --rc geninfo_unexecuted_blocks=1 00:11:42.733 00:11:42.733 ' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
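The long cmp_versions trace above is a numeric, component-wise comparison after splitting the version strings on '.', '-' and ':'. A self-contained sketch of the idea (purely numeric components assumed; the real scripts/common.sh also normalizes non-numeric parts via decimal):

version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"   # mirrors the 'lt 1.15 2' traced above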
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.733 --rc genhtml_branch_coverage=1 00:11:42.733 --rc genhtml_function_coverage=1 00:11:42.733 --rc genhtml_legend=1 00:11:42.733 --rc geninfo_all_blocks=1 00:11:42.733 --rc geninfo_unexecuted_blocks=1 00:11:42.733 00:11:42.733 ' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # : 0 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.733 11:46:50 
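The heavily repeated /opt/... segments in the PATH exports above are a cosmetic artifact: paths/export.sh prepends the same directories each time it is sourced in a fresh nesting level. A hypothetical guard that would keep the prepend idempotent (not part of SPDK):

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already on PATH, leave it alone
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH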
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:11:42.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@56 -- # have_pci_nics=0 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # xtrace_disable 00:11:42.733 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_devs=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_devs 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_net_devs=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # pci_drivers=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # local -A pci_drivers 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # net_devs=() 
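The "[: : integer expression expected" line above is bash complaining that '[' '' -eq 1 ']' compares an empty string numerically; the test evaluates as false and the script carries on. A minimal reproduction and the usual guard (the variable name is illustrative, not the one common.sh line 34 actually tests):

flag=""
[ "$flag" -eq 1 ] && echo on       # prints the same bash error, then evaluates false
[ "${flag:-0}" -eq 1 ] && echo on  # quiet and safe: an empty value defaults to 0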
00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga net_devs 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # e810=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga e810 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # x722=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga x722 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # mlx=() 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # local -ga mlx 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:50.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:50.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:50.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:50.881 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:50.881 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.882 11:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:11:50.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:11:50.882 00:11:50.882 --- 10.0.0.2 ping statistics --- 00:11:50.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.882 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:50.882 00:11:50.882 --- 10.0.0.1 ping statistics --- 00:11:50.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.882 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:57 
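Everything nvmf_tcp_init just did can be read as a short recipe: one port of the e810 pair becomes the target inside a network namespace, the other stays in the root namespace as the initiator. Condensed from the trace (interface names as detected on this machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator NIC
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1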
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=4140600 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 4140600 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 4140600 ']' 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.882 [2024-12-09 11:46:57.676650] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:11:50.882 [2024-12-09 11:46:57.676724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.882 [2024-12-09 11:46:57.777257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.882 [2024-12-09 11:46:57.830570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.882 [2024-12-09 11:46:57.830623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.882 [2024-12-09 11:46:57.830631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.882 [2024-12-09 11:46:57.830647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.882 [2024-12-09 11:46:57.830654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
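nvmfappstart, traced above, launches the target inside the namespace and blocks until its RPC socket answers. A rough equivalent (waitforlisten is approximated here with a poll loop; the real helper in autotest_common.sh does more). Note the unix socket lives on the filesystem, so rpc.py can reach it from the root namespace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do
    sleep 0.5   # wait for /var/tmp/spdk.sock to come up
done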
00:11:50.882 [2024-12-09 11:46:57.832995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.882 [2024-12-09 11:46:57.833125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.882 [2024-12-09 11:46:57.833274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.882 [2024-12-09 11:46:57.833274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 [2024-12-09 11:46:58.526738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 11:46:58 
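The storage stack the five iterations below exercise is built with a handful of RPCs, all visible in the trace. Replayed plainly (rpc.py path assumed):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0       # TCP transport, same options as above
"$RPC" bdev_malloc_create 64 512                          # 64 MiB bdev, 512 B blocks -> Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420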
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.882 [2024-12-09 11:46:58.594983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:50.882 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:50.883 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:55.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.184 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:09.184 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # sync 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # set +e 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # for i in {1..20} 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:12:09.185 rmmod nvme_tcp 00:12:09.185 rmmod nvme_fabrics 00:12:09.185 rmmod nvme_keyring 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # set -e 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@130 -- # return 0 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 4140600 ']' 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 4140600 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4140600 ']' 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 4140600 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
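By this point the log has traced the full provisioning path for the connect/disconnect test (TCP transport, Malloc bdev, subsystem, namespace, listener), five connect/disconnect rounds against it, and the start of teardown. Condensed into plain rpc.py calls (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py; the nvme-cli pair at the end is a sketch of what each iteration drives, with host-side flags abridged):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # 8 KiB IO unit, no in-capsule data
$rpc bdev_malloc_create 64 512                         # 64 MiB, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                         # allow any host, set serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# One of the five iterations:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # -> "disconnected 1 controller(s)"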
00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4140600 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4140600' 00:12:09.185 killing process with pid 4140600 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 4140600 00:12:09.185 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 4140600 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # iptr 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # remove_spdk_ns 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.445 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.357 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:12:11.357 00:12:11.357 real 0m28.749s 00:12:11.357 user 1m18.585s 00:12:11.357 sys 0m6.808s 00:12:11.357 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.357 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 ************************************ 00:12:11.357 END TEST nvmf_connect_disconnect 00:12:11.357 ************************************ 00:12:11.358 11:47:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:11.358 11:47:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.358 11:47:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.358 11:47:19 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.358 ************************************ 00:12:11.358 START TEST nvmf_multitarget 00:12:11.358 ************************************ 00:12:11.358 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:11.620 * Looking for test storage... 00:12:11.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.620 --rc genhtml_branch_coverage=1 00:12:11.620 --rc genhtml_function_coverage=1 00:12:11.620 --rc genhtml_legend=1 00:12:11.620 --rc geninfo_all_blocks=1 00:12:11.620 --rc geninfo_unexecuted_blocks=1 00:12:11.620 00:12:11.620 ' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.620 --rc genhtml_branch_coverage=1 00:12:11.620 --rc genhtml_function_coverage=1 00:12:11.620 --rc genhtml_legend=1 00:12:11.620 --rc geninfo_all_blocks=1 00:12:11.620 --rc geninfo_unexecuted_blocks=1 00:12:11.620 00:12:11.620 ' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.620 --rc genhtml_branch_coverage=1 00:12:11.620 --rc genhtml_function_coverage=1 00:12:11.620 --rc genhtml_legend=1 00:12:11.620 --rc geninfo_all_blocks=1 00:12:11.620 --rc geninfo_unexecuted_blocks=1 00:12:11.620 00:12:11.620 ' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.620 --rc genhtml_branch_coverage=1 00:12:11.620 --rc genhtml_function_coverage=1 00:12:11.620 --rc genhtml_legend=1 00:12:11.620 --rc geninfo_all_blocks=1 00:12:11.620 --rc geninfo_unexecuted_blocks=1 00:12:11.620 00:12:11.620 ' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.620 11:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.620 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.621 11:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # : 0 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:12:11.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@56 -- # have_pci_nics=0 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # xtrace_disable 00:12:11.621 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_devs=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_devs 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_net_devs=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # pci_drivers=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # local -A pci_drivers 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # net_devs=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga net_devs 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # e810=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga e810 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # x722=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga x722 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # mlx=() 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # local -ga mlx 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.764 11:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:19.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:19.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:19.764 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:19.764 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.764 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:12:19.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:12:19.765 00:12:19.765 --- 10.0.0.2 ping statistics --- 00:12:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.765 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:12:19.765 00:12:19.765 --- 10.0.0.1 ping statistics --- 00:12:19.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.765 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=4148498 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 4148498 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 4148498 ']' 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.765 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:19.765 [2024-12-09 11:47:26.917776] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
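Before the multitarget test starts its own nvmf_tgt, nvmftestinit rebuilds the two-port e810 topology traced just above: the first port (cvl_0_0) moves into a fresh namespace for the target side, its peer (cvl_0_1) stays in the root namespace for the initiator, an iptables rule opens TCP/4420 on the initiator interface, and one ping in each direction proves the link. The same commands, collected from the trace (the tagged comment is what lets the later iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup strip exactly this rule):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator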
00:12:19.765 [2024-12-09 11:47:26.917845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.765 [2024-12-09 11:47:27.021070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.765 [2024-12-09 11:47:27.075338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.765 [2024-12-09 11:47:27.075394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.765 [2024-12-09 11:47:27.075403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.765 [2024-12-09 11:47:27.075410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.765 [2024-12-09 11:47:27.075416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.765 [2024-12-09 11:47:27.077455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.765 [2024-12-09 11:47:27.077587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.765 [2024-12-09 11:47:27.077754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.765 [2024-12-09 11:47:27.077957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:20.025 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:20.285 "nvmf_tgt_1" 00:12:20.285 11:47:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:20.285 "nvmf_tgt_2" 00:12:20.285 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:20.285 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:20.546 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:20.546 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:20.546 true 00:12:20.546 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:20.546 true 00:12:20.546 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.546 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # sync 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # set +e 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # for i in {1..20} 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:12:20.808 rmmod nvme_tcp 00:12:20.808 rmmod nvme_fabrics 00:12:20.808 rmmod nvme_keyring 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # set -e 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@130 -- # return 0 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 4148498 ']' 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 4148498 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 4148498 ']' 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 4148498 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148498 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.808 11:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148498' 00:12:20.808 killing process with pid 4148498 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 4148498 00:12:20.808 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 4148498 00:12:21.069 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:21.069 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:21.069 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:21.069 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # iptr 00:12:21.069 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # remove_spdk_ns 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.070 11:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.986 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:12:22.986 00:12:22.986 real 0m11.611s 00:12:22.986 user 0m9.742s 00:12:22.986 sys 0m6.035s 00:12:22.986 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.986 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.986 ************************************ 00:12:22.986 END TEST nvmf_multitarget 00:12:22.986 ************************************ 00:12:23.246 11:47:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.246 11:47:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.246 11:47:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.246 11:47:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.246 ************************************ 00:12:23.246 START TEST nvmf_rpc 00:12:23.246 ************************************ 00:12:23.246 11:47:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.247 * Looking for test storage... 
00:12:23.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.247 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.509 --rc genhtml_branch_coverage=1 00:12:23.509 --rc genhtml_function_coverage=1 00:12:23.509 --rc genhtml_legend=1 00:12:23.509 --rc geninfo_all_blocks=1 00:12:23.509 --rc geninfo_unexecuted_blocks=1 00:12:23.509 00:12:23.509 ' 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.509 --rc genhtml_branch_coverage=1 00:12:23.509 --rc genhtml_function_coverage=1 00:12:23.509 --rc genhtml_legend=1 00:12:23.509 --rc geninfo_all_blocks=1 00:12:23.509 --rc geninfo_unexecuted_blocks=1 00:12:23.509 00:12:23.509 ' 00:12:23.509 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.509 --rc genhtml_branch_coverage=1 00:12:23.509 --rc genhtml_function_coverage=1 00:12:23.509 --rc genhtml_legend=1 00:12:23.509 --rc geninfo_all_blocks=1 00:12:23.509 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.510 --rc genhtml_branch_coverage=1 00:12:23.510 --rc genhtml_function_coverage=1 00:12:23.510 --rc genhtml_legend=1 00:12:23.510 --rc geninfo_all_blocks=1 00:12:23.510 --rc geninfo_unexecuted_blocks=1 00:12:23.510 00:12:23.510 ' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # : 0 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:12:23.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@56 -- # have_pci_nics=0 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:23.510 11:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # xtrace_disable 00:12:23.510 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_devs=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_devs 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_net_devs=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # pci_drivers=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # local -A pci_drivers 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # net_devs=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga net_devs 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # e810=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga e810 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # x722=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga x722 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # mlx=() 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # local -ga mlx 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:31.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.662 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.663 11:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:12:31.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:31.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms
00:12:31.663 
00:12:31.663 --- 10.0.0.2 ping statistics ---
00:12:31.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.663 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:31.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:31.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:12:31.663 
00:12:31.663 --- 10.0.0.1 ping statistics ---
00:12:31.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.663 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=4153188
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 4153188
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 4153188 ']'
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:31.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:31.663 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:31.663 [2024-12-09 11:47:38.621104] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
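Before launching the target, nvmftestinit has taken the two E810 ports it discovered above (cvl_0_0 and cvl_0_1 under 0000:4b:00.0/1) and split them across network namespaces, so initiator-to-target traffic must cross the physical link even though both ends run on the same machine. Condensed from the trace just shown, with every interface name, address, and rule taken verbatim from the log:

  ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> initiator

With both pings answering, NVMF_APP gets the ip netns exec prefix and nvmf_tgt (pid 4153188) is started inside cvl_0_0_ns_spdk; the DPDK/EAL and reactor notices that follow are its startup output.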
00:12:31.663 [2024-12-09 11:47:38.621175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.663 [2024-12-09 11:47:38.719017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.663 [2024-12-09 11:47:38.771503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.663 [2024-12-09 11:47:38.771554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.663 [2024-12-09 11:47:38.771563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.663 [2024-12-09 11:47:38.771570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.663 [2024-12-09 11:47:38.771577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.663 [2024-12-09 11:47:38.773631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.663 [2024-12-09 11:47:38.773788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.663 [2024-12-09 11:47:38.774053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.663 [2024-12-09 11:47:38.774051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.663 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:31.664 "tick_rate": 2400000000, 00:12:31.664 "poll_groups": [ 00:12:31.664 { 00:12:31.664 "name": "nvmf_tgt_poll_group_000", 00:12:31.664 "admin_qpairs": 0, 00:12:31.664 "io_qpairs": 0, 00:12:31.664 "current_admin_qpairs": 0, 00:12:31.664 "current_io_qpairs": 0, 00:12:31.664 "pending_bdev_io": 0, 00:12:31.664 "completed_nvme_io": 0, 00:12:31.664 "transports": [] 00:12:31.664 }, 00:12:31.664 { 00:12:31.664 "name": "nvmf_tgt_poll_group_001", 00:12:31.664 "admin_qpairs": 0, 00:12:31.664 "io_qpairs": 0, 00:12:31.664 "current_admin_qpairs": 0, 00:12:31.664 "current_io_qpairs": 0, 00:12:31.664 "pending_bdev_io": 0, 00:12:31.664 "completed_nvme_io": 0, 00:12:31.664 "transports": [] 00:12:31.664 }, 00:12:31.664 { 00:12:31.664 "name": "nvmf_tgt_poll_group_002", 00:12:31.664 "admin_qpairs": 0, 00:12:31.664 "io_qpairs": 0, 00:12:31.664 
"current_admin_qpairs": 0, 00:12:31.664 "current_io_qpairs": 0, 00:12:31.664 "pending_bdev_io": 0, 00:12:31.664 "completed_nvme_io": 0, 00:12:31.664 "transports": [] 00:12:31.664 }, 00:12:31.664 { 00:12:31.664 "name": "nvmf_tgt_poll_group_003", 00:12:31.664 "admin_qpairs": 0, 00:12:31.664 "io_qpairs": 0, 00:12:31.664 "current_admin_qpairs": 0, 00:12:31.664 "current_io_qpairs": 0, 00:12:31.664 "pending_bdev_io": 0, 00:12:31.664 "completed_nvme_io": 0, 00:12:31.664 "transports": [] 00:12:31.664 } 00:12:31.664 ] 00:12:31.664 }' 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:31.664 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 [2024-12-09 11:47:39.595279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:31.926 "tick_rate": 2400000000, 00:12:31.926 "poll_groups": [ 00:12:31.926 { 00:12:31.926 "name": "nvmf_tgt_poll_group_000", 00:12:31.926 "admin_qpairs": 0, 00:12:31.926 "io_qpairs": 0, 00:12:31.926 "current_admin_qpairs": 0, 00:12:31.926 "current_io_qpairs": 0, 00:12:31.926 "pending_bdev_io": 0, 00:12:31.926 "completed_nvme_io": 0, 00:12:31.926 "transports": [ 00:12:31.926 { 00:12:31.926 "trtype": "TCP" 00:12:31.926 } 00:12:31.926 ] 00:12:31.926 }, 00:12:31.926 { 00:12:31.926 "name": "nvmf_tgt_poll_group_001", 00:12:31.926 "admin_qpairs": 0, 00:12:31.926 "io_qpairs": 0, 00:12:31.926 "current_admin_qpairs": 0, 00:12:31.926 "current_io_qpairs": 0, 00:12:31.926 "pending_bdev_io": 0, 00:12:31.926 "completed_nvme_io": 0, 00:12:31.926 "transports": [ 00:12:31.926 { 00:12:31.926 "trtype": "TCP" 00:12:31.926 } 00:12:31.926 ] 00:12:31.926 }, 00:12:31.926 { 00:12:31.926 "name": "nvmf_tgt_poll_group_002", 00:12:31.926 "admin_qpairs": 0, 00:12:31.926 "io_qpairs": 0, 00:12:31.926 "current_admin_qpairs": 0, 00:12:31.926 "current_io_qpairs": 0, 00:12:31.926 "pending_bdev_io": 0, 00:12:31.926 "completed_nvme_io": 0, 00:12:31.926 "transports": [ 00:12:31.926 { 00:12:31.926 "trtype": "TCP" 
00:12:31.926 } 00:12:31.926 ] 00:12:31.926 }, 00:12:31.926 { 00:12:31.926 "name": "nvmf_tgt_poll_group_003", 00:12:31.926 "admin_qpairs": 0, 00:12:31.926 "io_qpairs": 0, 00:12:31.926 "current_admin_qpairs": 0, 00:12:31.926 "current_io_qpairs": 0, 00:12:31.926 "pending_bdev_io": 0, 00:12:31.926 "completed_nvme_io": 0, 00:12:31.926 "transports": [ 00:12:31.926 { 00:12:31.926 "trtype": "TCP" 00:12:31.926 } 00:12:31.926 ] 00:12:31.926 } 00:12:31.926 ] 00:12:31.926 }' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 Malloc1 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 [2024-12-09 11:47:39.795142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.926 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:32.187 [2024-12-09 11:47:39.832057] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:32.187 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.187 could not add new controller: failed to write to nvme-fabrics device 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:32.187 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.187 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.570 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.570 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.570 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.570 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:33.570 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.481 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.481 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.481 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.742 [2024-12-09 11:47:43.568462] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:35.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.742 could not add new controller: failed to write to nvme-fabrics device 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 
11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.742 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.654 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.654 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.654 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.654 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:37.654 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.567 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.568 
11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.568 [2024-12-09 11:47:47.339183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.568 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.481 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.481 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:41.481 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.481 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:41.481 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:43.397 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 [2024-12-09 11:47:51.092756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.397 11:47:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.311 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.311 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.311 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.311 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:45.311 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:47.224 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:47.224 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 [2024-12-09 11:47:54.839587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.225 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.610 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.610 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:48.610 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.610 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:48.610 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:51.153 
11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
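
For readability: every iteration traced through this stretch drives the same subsystem lifecycle. Reconstructed from the commands tagged target/rpc.sh@81-94 in the trace above (all commands appear verbatim in the log; only the value of $loops is not visible here), one pass looks roughly like:

  for i in $(seq 1 $loops); do
    # stand up a subsystem with a well-known serial number
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # attach from the initiator and confirm the namespace surfaces as a block device
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    # detach and dismantle again so the next iteration starts clean
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
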
00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 [2024-12-09 11:47:58.691033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.153 11:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.536 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.536 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:52.537 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.537 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:52.537 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
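
The waitforserial / waitforserial_disconnect polling that produces the repeated lsblk/grep pairs above lives in common/autotest_common.sh (function bodies traced at @1202-1212 and @1223-1235). A minimal sketch consistent with that trace, with the retry bound and 2-second sleep taken from the logged expressions and the argument handling simplified:

  waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    # poll until the expected number of block devices carries this serial
    while (( i++ <= 15 )); do
      sleep 2
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
  }

  waitforserial_disconnect() {
    local serial=$1 i=0
    # inverse check: succeed once no block device carries the serial any more
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
      (( ++i > 15 )) && return 1
      sleep 2
    done
    return 0
  }
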
00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:54.447 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 [2024-12-09 11:48:02.373734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.709 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.094 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.094 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.094 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.094 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.094 11:48:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.006 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:58.268 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:58.268 
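
From here the trace switches to the second loop in rpc.sh (@99-107), which cycles the subsystem lifecycle five times without ever connecting an initiator. Per the tagged commands, each pass is roughly:

  for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # no -n flag this time; the script later removes NSID 1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
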
11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 [2024-12-09 11:48:06.101602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.268 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 [2024-12-09 11:48:06.169771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 
11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 [2024-12-09 11:48:06.237993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 [2024-12-09 11:48:06.310213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 [2024-12-09 11:48:06.378438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.530 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:58.792 "tick_rate": 2400000000, 00:12:58.792 "poll_groups": [ 00:12:58.792 { 00:12:58.792 "name": "nvmf_tgt_poll_group_000", 00:12:58.792 "admin_qpairs": 0, 00:12:58.792 "io_qpairs": 224, 00:12:58.792 "current_admin_qpairs": 0, 00:12:58.792 "current_io_qpairs": 0, 00:12:58.792 "pending_bdev_io": 0, 00:12:58.792 "completed_nvme_io": 274, 00:12:58.792 "transports": [ 00:12:58.792 { 00:12:58.792 "trtype": "TCP" 00:12:58.792 } 00:12:58.792 ] 00:12:58.792 }, 00:12:58.792 { 00:12:58.792 "name": "nvmf_tgt_poll_group_001", 00:12:58.792 "admin_qpairs": 1, 00:12:58.792 "io_qpairs": 223, 00:12:58.792 "current_admin_qpairs": 0, 00:12:58.792 "current_io_qpairs": 0, 00:12:58.792 "pending_bdev_io": 0, 00:12:58.792 "completed_nvme_io": 404, 00:12:58.792 "transports": [ 00:12:58.792 { 00:12:58.792 "trtype": "TCP" 00:12:58.792 } 00:12:58.792 ] 00:12:58.792 }, 00:12:58.792 { 00:12:58.792 "name": "nvmf_tgt_poll_group_002", 00:12:58.792 "admin_qpairs": 6, 00:12:58.792 "io_qpairs": 218, 00:12:58.792 "current_admin_qpairs": 0, 00:12:58.792 "current_io_qpairs": 0, 00:12:58.792 "pending_bdev_io": 0, 00:12:58.792 "completed_nvme_io": 221, 00:12:58.792 "transports": [ 00:12:58.792 { 00:12:58.792 "trtype": "TCP" 00:12:58.792 } 00:12:58.792 ] 00:12:58.792 }, 00:12:58.792 { 00:12:58.792 "name": "nvmf_tgt_poll_group_003", 00:12:58.792 "admin_qpairs": 0, 00:12:58.792 "io_qpairs": 224, 00:12:58.792 "current_admin_qpairs": 0, 00:12:58.792 "current_io_qpairs": 0, 00:12:58.792 "pending_bdev_io": 0, 00:12:58.792 "completed_nvme_io": 340, 00:12:58.792 "transports": [ 00:12:58.792 { 00:12:58.792 "trtype": "TCP" 00:12:58.792 } 00:12:58.792 ] 00:12:58.792 } 00:12:58.792 ] 00:12:58.792 }' 00:12:58.792 11:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # sync 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # set +e 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # for i in {1..20} 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:12:58.792 rmmod nvme_tcp 00:12:58.792 rmmod nvme_fabrics 00:12:58.792 rmmod nvme_keyring 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # set -e 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@130 -- # return 0 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 4153188 ']' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 4153188 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 4153188 ']' 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 4153188 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:58.792 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.793 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153188 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4153188' 00:12:59.053 killing process with pid 4153188 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 4153188 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 4153188 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # iptr 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # remove_spdk_ns 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.053 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:13:01.601 00:13:01.601 real 0m37.955s 00:13:01.601 user 1m54.063s 00:13:01.601 sys 0m7.804s 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.601 ************************************ 00:13:01.601 END TEST nvmf_rpc 00:13:01.601 ************************************ 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.601 ************************************ 00:13:01.601 START TEST nvmf_invalid 00:13:01.601 ************************************ 00:13:01.601 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.601 * Looking for test storage... 
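
Glancing back at the nvmf_get_stats verification that closed nvmf_rpc just above: the jsum helper (target/rpc.sh@19-20 in the trace) sums one jq projection across all poll groups. A sketch of that helper and the two assertions, assuming, as the trace suggests, that the stats JSON has already been captured into $stats:

  jsum() {
    local filter=$1
    # emit one number per poll group, then add them up
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$(rpc_cmd nvmf_get_stats)
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run
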
00:13:01.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:01.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.601 --rc genhtml_branch_coverage=1 00:13:01.601 --rc genhtml_function_coverage=1 00:13:01.601 --rc genhtml_legend=1 00:13:01.601 --rc geninfo_all_blocks=1 00:13:01.601 --rc geninfo_unexecuted_blocks=1 00:13:01.601 00:13:01.601 ' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:01.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.601 --rc genhtml_branch_coverage=1 00:13:01.601 --rc genhtml_function_coverage=1 00:13:01.601 --rc genhtml_legend=1 00:13:01.601 --rc geninfo_all_blocks=1 00:13:01.601 --rc geninfo_unexecuted_blocks=1 00:13:01.601 00:13:01.601 ' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:01.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.601 --rc genhtml_branch_coverage=1 00:13:01.601 --rc genhtml_function_coverage=1 00:13:01.601 --rc genhtml_legend=1 00:13:01.601 --rc geninfo_all_blocks=1 00:13:01.601 --rc geninfo_unexecuted_blocks=1 00:13:01.601 00:13:01.601 ' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:01.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.601 --rc genhtml_branch_coverage=1 00:13:01.601 --rc genhtml_function_coverage=1 00:13:01.601 --rc genhtml_legend=1 00:13:01.601 --rc geninfo_all_blocks=1 00:13:01.601 --rc geninfo_unexecuted_blocks=1 00:13:01.601 00:13:01.601 ' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:01.601 11:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.601 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # : 0 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:13:01.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@56 -- # have_pci_nics=0 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # xtrace_disable 00:13:01.602 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_devs=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_devs 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_net_devs=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # pci_drivers=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # local -A pci_drivers 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # net_devs=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga net_devs 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # e810=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga e810 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # x722=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga x722 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # mlx=() 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # local -ga mlx 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:09.741 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:09.741 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@373 -- # [[ ice == 
unbound ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:09.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:09.741 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:13:09.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:13:09.741 00:13:09.741 --- 10.0.0.2 ping statistics --- 00:13:09.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.741 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:13:09.741 00:13:09.741 --- 10.0.0.1 ping statistics --- 00:13:09.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.741 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=4163487 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 4163487 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 4163487 ']' 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.741 11:48:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.741 [2024-12-09 11:48:16.600424] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
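At this point the harness has started nvmf_tgt (pid 4163487) inside the cvl_0_0_ns_spdk namespace and waitforlisten is blocking until the app's JSON-RPC socket answers. A minimal bash sketch of that launch-and-poll pattern, assuming the stock rpc.py client and its default /var/tmp/spdk.sock socket; the retry count and sleep interval are illustrative, not the harness's actual values:

#!/usr/bin/env bash
# Launch the NVMe-oF target inside the namespace built above, then poll its
# JSON-RPC socket; spdk_get_version only succeeds once the server is listening.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    # Bail out early if the app crashed instead of merely being slow to start.
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done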
00:13:09.742 [2024-12-09 11:48:16.600494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.742 [2024-12-09 11:48:16.699662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.742 [2024-12-09 11:48:16.753101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.742 [2024-12-09 11:48:16.753158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.742 [2024-12-09 11:48:16.753166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.742 [2024-12-09 11:48:16.753174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.742 [2024-12-09 11:48:16.753180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.742 [2024-12-09 11:48:16.755603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.742 [2024-12-09 11:48:16.755755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.742 [2024-12-09 11:48:16.756079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.742 [2024-12-09 11:48:16.756082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.742 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3624 00:13:09.742 [2024-12-09 11:48:17.610591] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:10.002 { 00:13:10.002 "nqn": "nqn.2016-06.io.spdk:cnode3624", 00:13:10.002 "tgt_name": "foobar", 00:13:10.002 "method": "nvmf_create_subsystem", 00:13:10.002 "req_id": 1 00:13:10.002 } 00:13:10.002 Got JSON-RPC error response 00:13:10.002 response: 00:13:10.002 { 00:13:10.002 "code": -32603, 00:13:10.002 "message": "Unable to find target foobar" 00:13:10.002 }' 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:10.002 { 00:13:10.002 "nqn": "nqn.2016-06.io.spdk:cnode3624", 00:13:10.002 "tgt_name": "foobar", 00:13:10.002 "method": "nvmf_create_subsystem", 00:13:10.002 "req_id": 1 00:13:10.002 } 00:13:10.002 Got JSON-RPC error response 00:13:10.002 
response: 00:13:10.002 { 00:13:10.002 "code": -32603, 00:13:10.002 "message": "Unable to find target foobar" 00:13:10.002 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21863 00:13:10.002 [2024-12-09 11:48:17.799248] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21863: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:10.002 { 00:13:10.002 "nqn": "nqn.2016-06.io.spdk:cnode21863", 00:13:10.002 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.002 "method": "nvmf_create_subsystem", 00:13:10.002 "req_id": 1 00:13:10.002 } 00:13:10.002 Got JSON-RPC error response 00:13:10.002 response: 00:13:10.002 { 00:13:10.002 "code": -32602, 00:13:10.002 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.002 }' 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:10.002 { 00:13:10.002 "nqn": "nqn.2016-06.io.spdk:cnode21863", 00:13:10.002 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.002 "method": "nvmf_create_subsystem", 00:13:10.002 "req_id": 1 00:13:10.002 } 00:13:10.002 Got JSON-RPC error response 00:13:10.002 response: 00:13:10.002 { 00:13:10.002 "code": -32602, 00:13:10.002 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.002 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:10.002 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9164 00:13:10.263 [2024-12-09 11:48:17.991840] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9164: invalid model number 'SPDK_Controller' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:10.263 { 00:13:10.263 "nqn": "nqn.2016-06.io.spdk:cnode9164", 00:13:10.263 "model_number": "SPDK_Controller\u001f", 00:13:10.263 "method": "nvmf_create_subsystem", 00:13:10.263 "req_id": 1 00:13:10.263 } 00:13:10.263 Got JSON-RPC error response 00:13:10.263 response: 00:13:10.263 { 00:13:10.263 "code": -32602, 00:13:10.263 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.263 }' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:10.263 { 00:13:10.263 "nqn": "nqn.2016-06.io.spdk:cnode9164", 00:13:10.263 "model_number": "SPDK_Controller\u001f", 00:13:10.263 "method": "nvmf_create_subsystem", 00:13:10.263 "req_id": 1 00:13:10.263 } 00:13:10.263 Got JSON-RPC error response 00:13:10.263 response: 00:13:10.263 { 00:13:10.263 "code": -32602, 00:13:10.263 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.263 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:10.263 11:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:10.263 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.264 
11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.264 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 
00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wpw_i$C*z/\1#89vfv~Ai' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'wpw_i$C*z/\1#89vfv~Ai' nqn.2016-06.io.spdk:cnode24165 00:13:10.525 [2024-12-09 11:48:18.344980] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24165: invalid serial number 'wpw_i$C*z/\1#89vfv~Ai' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:10.525 { 00:13:10.525 "nqn": "nqn.2016-06.io.spdk:cnode24165", 00:13:10.525 "serial_number": "wpw_i$C*z/\\1#89vfv~Ai", 00:13:10.525 "method": "nvmf_create_subsystem", 00:13:10.525 "req_id": 1 00:13:10.525 } 00:13:10.525 Got JSON-RPC error response 00:13:10.525 response: 00:13:10.525 { 00:13:10.525 "code": -32602, 00:13:10.525 "message": "Invalid SN wpw_i$C*z/\\1#89vfv~Ai" 00:13:10.525 }' 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:10.525 { 00:13:10.525 "nqn": "nqn.2016-06.io.spdk:cnode24165", 00:13:10.525 "serial_number": "wpw_i$C*z/\\1#89vfv~Ai", 00:13:10.525 "method": "nvmf_create_subsystem", 00:13:10.525 "req_id": 1 00:13:10.525 } 00:13:10.525 Got JSON-RPC error response 00:13:10.525 response: 00:13:10.525 { 00:13:10.525 "code": -32602, 00:13:10.525 "message": "Invalid SN wpw_i$C*z/\\1#89vfv~Ai" 00:13:10.525 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:10.525 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.526 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 
00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:10.787 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
79 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=f 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.788 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.789 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x3b'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]]
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$xh%Vq[6PwYOTW(;3DiPdIc$g1P~Z'\''{fb"*&Hr;J'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$xh%Vq[6PwYOTW(;3DiPdIc$g1P~Z'\''{fb"*&Hr;J' nqn.2016-06.io.spdk:cnode2459
00:13:11.050 [2024-12-09 11:48:18.854609] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2459: invalid model number '$xh%Vq[6PwYOTW(;3DiPdIc$g1P~Z'{fb"*&Hr;J'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:11.050 {
00:13:11.050 "nqn": "nqn.2016-06.io.spdk:cnode2459",
00:13:11.050 "model_number": "$xh%Vq[6Pw\u007fYOTW(;3DiPdIc$g1P~Z'\''{fb\"*&Hr;J",
00:13:11.050 "method": "nvmf_create_subsystem",
00:13:11.050 "req_id": 1
00:13:11.050 }
00:13:11.050 Got JSON-RPC error response
00:13:11.050 response:
00:13:11.050 {
00:13:11.050 "code": -32602,
00:13:11.050 "message": "Invalid MN $xh%Vq[6Pw\u007fYOTW(;3DiPdIc$g1P~Z'\''{fb\"*&Hr;J"
00:13:11.050 }'
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:11.050 {
00:13:11.050 "nqn": "nqn.2016-06.io.spdk:cnode2459",
00:13:11.050 "model_number": "$xh%Vq[6Pw\u007fYOTW(;3DiPdIc$g1P~Z'{fb\"*&Hr;J",
00:13:11.050 "method": "nvmf_create_subsystem",
00:13:11.050 "req_id": 1
00:13:11.050 }
00:13:11.050 Got JSON-RPC error response
00:13:11.050 response:
00:13:11.050 {
00:13:11.050 "code": -32602,
00:13:11.050 "message": "Invalid MN $xh%Vq[6Pw\u007fYOTW(;3DiPdIc$g1P~Z'{fb\"*&Hr;J"
00:13:11.050 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:11.050 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:13:11.310 [2024-12-09 11:48:19.043322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:11.310 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:13:11.570 [2024-12-09 11:48:19.420479] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:13:11.570 {
00:13:11.570 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:11.570 "listen_address": {
00:13:11.570 "trtype": "tcp",
00:13:11.570 "traddr": "",
00:13:11.570 "trsvcid": "4421"
00:13:11.570 },
00:13:11.570 "method": "nvmf_subsystem_remove_listener",
00:13:11.570 "req_id": 1
00:13:11.570 }
00:13:11.570 Got JSON-RPC error response
00:13:11.570 response:
00:13:11.570 {
00:13:11.570 "code": -32602,
00:13:11.570 "message": "Invalid parameters"
00:13:11.570 }'
00:13:11.570 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:13:11.570 {
00:13:11.570 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:11.570 "listen_address": {
00:13:11.570 "trtype": "tcp",
00:13:11.570 "traddr": "",
00:13:11.570 "trsvcid": "4421"
00:13:11.570 },
00:13:11.570 "method": "nvmf_subsystem_remove_listener",
00:13:11.570 "req_id": 1
00:13:11.570 }
00:13:11.570 Got JSON-RPC error response
00:13:11.570 response:
00:13:11.571 {
00:13:11.571 "code": -32602,
00:13:11.571 "message": "Invalid parameters"
00:13:11.571 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:13:11.571 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25575 -i 0
00:13:11.831 [2024-12-09 11:48:19.609048] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25575: invalid cntlid range [0-65519]
00:13:11.831 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:13:11.831 {
00:13:11.831 "nqn": "nqn.2016-06.io.spdk:cnode25575",
00:13:11.831 "min_cntlid": 0,
00:13:11.831 "method": "nvmf_create_subsystem",
00:13:11.831 "req_id": 1
00:13:11.831 }
00:13:11.831 Got JSON-RPC error response
00:13:11.831 response:
00:13:11.831 {
00:13:11.831 "code": -32602,
00:13:11.831 "message": "Invalid cntlid range [0-65519]"
00:13:11.831 }'
00:13:11.831 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:13:11.831 {
00:13:11.831 "nqn": "nqn.2016-06.io.spdk:cnode25575",
00:13:11.831 "min_cntlid": 0,
00:13:11.831 "method": "nvmf_create_subsystem",
00:13:11.831 "req_id": 1
00:13:11.831 }
00:13:11.831 Got JSON-RPC error response
00:13:11.831 response:
00:13:11.831 {
00:13:11.831 "code": -32602,
00:13:11.831 "message": "Invalid cntlid range [0-65519]"
00:13:11.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:11.831 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30261 -i 65520
00:13:12.091 [2024-12-09 11:48:19.781667] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30261: invalid cntlid range [65520-65519]
00:13:12.091 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:13:12.091 {
00:13:12.091 "nqn": "nqn.2016-06.io.spdk:cnode30261",
00:13:12.091 "min_cntlid": 65520,
00:13:12.091 "method": "nvmf_create_subsystem",
00:13:12.091 "req_id": 1
00:13:12.091 }
00:13:12.091 Got JSON-RPC error response
00:13:12.091 response:
00:13:12.091 {
00:13:12.091 "code": -32602,
00:13:12.091 "message": "Invalid cntlid range [65520-65519]"
00:13:12.091 }'
00:13:12.091 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:13:12.091 {
00:13:12.091 "nqn": "nqn.2016-06.io.spdk:cnode30261",
00:13:12.091 "min_cntlid": 65520,
00:13:12.091 "method": "nvmf_create_subsystem",
00:13:12.091 "req_id": 1
00:13:12.091 }
00:13:12.091 Got JSON-RPC error response
00:13:12.091 response:
00:13:12.091 {
00:13:12.091 "code": -32602,
00:13:12.091 "message": "Invalid cntlid range [65520-65519]"
00:13:12.091 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:12.091 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15124 -I 0
00:13:12.091 [2024-12-09 11:48:19.970223] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15124: invalid cntlid range [1-0]
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:13:12.352 {
00:13:12.352 "nqn": "nqn.2016-06.io.spdk:cnode15124",
00:13:12.352 "max_cntlid": 0,
00:13:12.352 "method": "nvmf_create_subsystem",
00:13:12.352 "req_id": 1
00:13:12.352 }
00:13:12.352 Got JSON-RPC error response
00:13:12.352 response:
00:13:12.352 {
00:13:12.352 "code": -32602,
00:13:12.352 "message": "Invalid cntlid range [1-0]"
00:13:12.352 }'
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:13:12.352 {
00:13:12.352 "nqn": "nqn.2016-06.io.spdk:cnode15124",
00:13:12.352 "max_cntlid": 0,
00:13:12.352 "method": "nvmf_create_subsystem",
00:13:12.352 "req_id": 1
00:13:12.352 }
00:13:12.352 Got JSON-RPC error response
00:13:12.352 response:
00:13:12.352 {
00:13:12.352 "code": -32602,
00:13:12.352 "message": "Invalid cntlid range [1-0]"
00:13:12.352 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20214 -I 65520
00:13:12.352 [2024-12-09 11:48:20.158824] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20214: invalid cntlid range [1-65520]
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:13:12.352 {
00:13:12.352 "nqn": "nqn.2016-06.io.spdk:cnode20214",
00:13:12.352 "max_cntlid": 65520,
00:13:12.352 "method": "nvmf_create_subsystem",
00:13:12.352 "req_id": 1
00:13:12.352 }
00:13:12.352 Got JSON-RPC error response
00:13:12.352 response:
00:13:12.352 {
00:13:12.352 "code": -32602,
00:13:12.352 "message": "Invalid cntlid range [1-65520]"
00:13:12.352 }'
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:13:12.352 {
00:13:12.352 "nqn": "nqn.2016-06.io.spdk:cnode20214",
00:13:12.352 "max_cntlid": 65520,
00:13:12.352 "method": "nvmf_create_subsystem",
00:13:12.352 "req_id": 1
00:13:12.352 }
00:13:12.352 Got JSON-RPC error response
00:13:12.352 response:
00:13:12.352 {
00:13:12.352 "code": -32602,
00:13:12.352 "message": "Invalid cntlid range [1-65520]"
00:13:12.352 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:12.352 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3067 -i 6 -I 5
00:13:12.612 [2024-12-09 11:48:20.351431] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3067: invalid cntlid range [6-5]
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:13:12.612 {
00:13:12.612 "nqn": "nqn.2016-06.io.spdk:cnode3067",
00:13:12.612 "min_cntlid": 6,
00:13:12.612 "max_cntlid": 5,
00:13:12.612 "method": "nvmf_create_subsystem",
00:13:12.612 "req_id": 1
00:13:12.612 }
00:13:12.612 Got JSON-RPC error response
00:13:12.612 response:
00:13:12.612 {
00:13:12.612 "code": -32602,
00:13:12.612 "message": "Invalid cntlid range [6-5]"
00:13:12.612 }'
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:13:12.612 {
00:13:12.612 "nqn": "nqn.2016-06.io.spdk:cnode3067",
00:13:12.612 "min_cntlid": 6,
00:13:12.612 "max_cntlid": 5,
00:13:12.612 "method": "nvmf_create_subsystem",
00:13:12.612 "req_id": 1
00:13:12.612 }
00:13:12.612 Got JSON-RPC error response
00:13:12.612 response:
00:13:12.612 {
00:13:12.612 "code": -32602,
00:13:12.612 "message": "Invalid cntlid range [6-5]"
00:13:12.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:13:12.612 {
00:13:12.612 "name": "foobar",
00:13:12.612 "method": "nvmf_delete_target",
00:13:12.612 "req_id": 1
00:13:12.612 }
00:13:12.612 Got JSON-RPC error response
00:13:12.612 response:
00:13:12.612 {
00:13:12.612 "code": -32602,
00:13:12.612 "message": "The specified target doesn'\''t exist, cannot delete it."
00:13:12.612 }'
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:13:12.612 {
00:13:12.612 "name": "foobar",
00:13:12.612 "method": "nvmf_delete_target",
00:13:12.612 "req_id": 1
00:13:12.612 }
00:13:12.612 Got JSON-RPC error response
00:13:12.612 response:
00:13:12.612 {
00:13:12.612 "code": -32602,
00:13:12.612 "message": "The specified target doesn't exist, cannot delete it."
00:13:12.612 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # sync
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # set +e
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # for i in {1..20}
00:13:12.612 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:13:12.873 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # set -e
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@130 -- # return 0
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 4163487 ']'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 4163487
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 4163487 ']'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 4163487
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4163487
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4163487'
killing process with pid 4163487
11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 4163487
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 4163487
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # iptr
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # remove_spdk_ns
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:12.873 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:13:15.420
00:13:15.420 real 0m13.847s
00:13:15.420 user 0m20.524s
00:13:15.420 sys 0m6.473s
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:15.420 ************************************
00:13:15.420 END TEST nvmf_invalid
00:13:15.420 ************************************
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:15.420 ************************************
00:13:15.420 START TEST nvmf_connect_stress
00:13:15.420 ************************************
00:13:15.420 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:15.420 * Looking for test storage...
00:13:15.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.420 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.420 --rc genhtml_branch_coverage=1 00:13:15.420 --rc genhtml_function_coverage=1 00:13:15.420 --rc genhtml_legend=1 00:13:15.420 --rc geninfo_all_blocks=1 00:13:15.421 --rc geninfo_unexecuted_blocks=1 00:13:15.421 00:13:15.421 ' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.421 --rc genhtml_branch_coverage=1 00:13:15.421 --rc genhtml_function_coverage=1 00:13:15.421 --rc genhtml_legend=1 00:13:15.421 --rc geninfo_all_blocks=1 00:13:15.421 --rc geninfo_unexecuted_blocks=1 00:13:15.421 00:13:15.421 ' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.421 --rc genhtml_branch_coverage=1 00:13:15.421 --rc genhtml_function_coverage=1 00:13:15.421 --rc genhtml_legend=1 00:13:15.421 --rc geninfo_all_blocks=1 00:13:15.421 --rc geninfo_unexecuted_blocks=1 00:13:15.421 00:13:15.421 ' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.421 --rc genhtml_branch_coverage=1 00:13:15.421 --rc genhtml_function_coverage=1 00:13:15.421 --rc genhtml_legend=1 00:13:15.421 --rc geninfo_all_blocks=1 00:13:15.421 --rc geninfo_unexecuted_blocks=1 00:13:15.421 00:13:15.421 ' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # : 0 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@34 
-- # '[' '' -eq 1 ']' 00:13:15.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@56 -- # have_pci_nics=0 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # xtrace_disable 00:13:15.421 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_devs=() 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_devs 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_net_devs=() 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # pci_drivers=() 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # local -A pci_drivers 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # net_devs=() 00:13:23.568 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga net_devs 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # e810=() 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga e810 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # x722=() 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga x722 00:13:23.569 11:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # mlx=() 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # local -ga mlx 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:23.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:23.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:23.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:23.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:13:23.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:23.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms
00:13:23.569
00:13:23.569 --- 10.0.0.2 ping statistics ---
00:13:23.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:23.569 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:23.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:23.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:13:23.569
00:13:23.569 --- 10.0.0.1 ping statistics ---
00:13:23.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:23.569 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:13:23.569 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=4168648
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 4168648
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 4168648 ']'
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:13:23.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.570 11:48:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 [2024-12-09 11:48:30.509553] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:13:23.570 [2024-12-09 11:48:30.509603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.570 [2024-12-09 11:48:30.600612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.570 [2024-12-09 11:48:30.631810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.570 [2024-12-09 11:48:30.631843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.570 [2024-12-09 11:48:30.631851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.570 [2024-12-09 11:48:30.631856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.570 [2024-12-09 11:48:30.631861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.570 [2024-12-09 11:48:30.633127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.570 [2024-12-09 11:48:30.633279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.570 [2024-12-09 11:48:30.633282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 [2024-12-09 11:48:31.354894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 [2024-12-09 11:48:31.379103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.570 NULL1 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4168823 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.570 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.830 11:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.830 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.090 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.090 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:24.090 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.090 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.090 11:48:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.351 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.351 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:24.351 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.351 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.351 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.611 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.611 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:24.611 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.611 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.611 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.181 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.181 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:25.181 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.181 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.181 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.441 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.441 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:25.441 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.441 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.441 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.702 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.702 11:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:33.511 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.511 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.511 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.771 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4168823 00:13:33.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4168823) - No such process 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4168823 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # sync 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # set +e 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # for i in {1..20} 00:13:33.771 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:13:33.771 rmmod nvme_tcp 00:13:33.771 rmmod nvme_fabrics 00:13:33.771 rmmod nvme_keyring 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # set -e 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@130 -- # return 0 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 4168648 ']' 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 4168648 ']' 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
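The nvmftestfini teardown above deliberately tolerates unload failures: nvme-tcp can still be referenced while connections drain, so the helper flips off errexit and retries the module removal. A sketch of that retry shape, using only what the trace shows (sync, set +e, for i in {1..20}, modprobe -v -r); the back-off is an assumption:

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics/nvme_keyring deps
      sleep 1                            # assumed pause; the trace elides it
  done
  modprobe -v -r nvme-fabrics
  set -e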
00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4168648' 00:13:34.031 killing process with pid 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 4168648 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:34.031 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # iptr 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # remove_spdk_ns 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.032 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.574 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:13:36.574 00:13:36.574 real 0m21.019s 00:13:36.574 user 0m42.288s 00:13:36.574 sys 0m9.054s 00:13:36.574 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.575 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.575 ************************************ 00:13:36.575 END TEST nvmf_connect_stress 00:13:36.575 ************************************ 00:13:36.575 11:48:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:36.575 11:48:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.575 11:48:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.575 11:48:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.575 ************************************ 00:13:36.575 START TEST nvmf_fused_ordering 00:13:36.575 ************************************ 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:36.575 * Looking for test storage... 
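killprocess, traced above, guards the kill with a liveness probe and a process-name check so the harness never signals the wrong thing (for example a sudo wrapper). A sketch under those assumptions, with the sudo branch elided:

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid"                                      # fail fast if already gone
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid") # here: reactor_1
      fi
      [[ $process_name == sudo ]] && return 1   # the real helper handles sudo separately
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # collect exit status
  }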
00:13:36.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:36.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.575 --rc genhtml_branch_coverage=1 00:13:36.575 --rc genhtml_function_coverage=1 00:13:36.575 --rc genhtml_legend=1 00:13:36.575 --rc geninfo_all_blocks=1 00:13:36.575 --rc geninfo_unexecuted_blocks=1 00:13:36.575 00:13:36.575 ' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:36.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.575 --rc genhtml_branch_coverage=1 00:13:36.575 --rc genhtml_function_coverage=1 00:13:36.575 --rc genhtml_legend=1 00:13:36.575 --rc geninfo_all_blocks=1 00:13:36.575 --rc geninfo_unexecuted_blocks=1 00:13:36.575 00:13:36.575 ' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:36.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.575 --rc genhtml_branch_coverage=1 00:13:36.575 --rc genhtml_function_coverage=1 00:13:36.575 --rc genhtml_legend=1 00:13:36.575 --rc geninfo_all_blocks=1 00:13:36.575 --rc geninfo_unexecuted_blocks=1 00:13:36.575 00:13:36.575 ' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:36.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.575 --rc genhtml_branch_coverage=1 00:13:36.575 --rc genhtml_function_coverage=1 00:13:36.575 --rc genhtml_legend=1 00:13:36.575 --rc geninfo_all_blocks=1 00:13:36.575 --rc geninfo_unexecuted_blocks=1 00:13:36.575 00:13:36.575 ' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
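The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) decides which lcov option set applies: versions are split on '.' and '-' and compared component-wise as integers. A hedged reconstruction of that helper, not the verbatim script:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.- op=$2 v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]   # all components equal: strict < and > both fail
  }
  # Here lt 1.15 2 succeeds (1 < 2), so the pre-2.0 lcov flag set is exported.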
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # : 0 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:13:36.575 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@34 
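The PATH exports above visibly snowball: paths/export.sh prepends its toolchain directories on every sourcing, so the go/golangci/protoc prefixes repeat each time the file is pulled in again. A guard like this (hypothetical, not in the script) would keep the prepend idempotent:

  case ":$PATH:" in
      *":/opt/go/1.21.1/bin:"*) ;;               # already present, skip
      *) PATH=/opt/go/1.21.1/bin:$PATH ;;
  esac
  export PATH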
-- # '[' '' -eq 1 ']' 00:13:36.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@56 -- # have_pci_nics=0 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # xtrace_disable 00:13:36.576 11:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_devs=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_devs 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_net_devs=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # pci_drivers=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # local -A pci_drivers 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # net_devs=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga net_devs 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # e810=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga e810 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # x722=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga x722 00:13:44.719 11:48:51 
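The "[: : integer expression expected" complaint above is a genuine shell warning: nvmf/common.sh line 34 feeds an empty string to a numeric test ('[' '' -eq 1 ']'), which test(1) rejects. The harness survives because the test simply fails, but a defensive spelling avoids the noise (flag name hypothetical):

  # Expand an unset/empty flag to 0 before the numeric comparison.
  if (( ${SPDK_SOME_FLAG:-0} == 1 )); then
      echo "flag enabled"
  fi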
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # mlx=() 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # local -ga mlx 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:44.719 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:44.719 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:44.719 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:44.720 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:44.720 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 
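Device discovery above is plain sysfs walking: for each supported PCI function, list /sys/bus/pci/devices/<bdf>/net to find the bound netdev, strip the path, and collect the interface names (cvl_0_0 and cvl_0_1 here). A condensed sketch of the loop the trace executes, omitting the link-state and rdma branches:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifnames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done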
-- # net_devs+=("${pci_net_devs[@]}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:13:44.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:13:44.720 00:13:44.720 --- 10.0.0.2 ping statistics --- 00:13:44.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.720 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:44.720 00:13:44.720 --- 10.0.0.1 ping statistics --- 00:13:44.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.720 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=4175109 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 4175109 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 4175109 ']' 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
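The topology nvmf_tcp_init builds above moves the target port into its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk across the physical ports rather than loopback. Collected from the trace, the setup reduces to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host -> namespace; the reverse ping runs inside it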
/var/tmp/spdk.sock...' 00:13:44.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.720 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.720 [2024-12-09 11:48:51.837543] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:13:44.720 [2024-12-09 11:48:51.837607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.720 [2024-12-09 11:48:51.940245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.720 [2024-12-09 11:48:51.990032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.720 [2024-12-09 11:48:51.990086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.720 [2024-12-09 11:48:51.990094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.720 [2024-12-09 11:48:51.990102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.720 [2024-12-09 11:48:51.990108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.720 [2024-12-09 11:48:51.990881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 [2024-12-09 11:48:52.698038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
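nvmfappstart, traced above, launches the target inside the namespace with core mask 0x2 and blocks until the RPC socket answers; condensed from the trace (waitforlisten is the harness helper that polls /var/tmp/spdk.sock):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once the UNIX domain socket accepts RPCs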
-- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 [2024-12-09 11:48:52.714302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 NULL1 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.981 11:48:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:44.981 [2024-12-09 11:48:52.771361] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
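Target provisioning for the fused-ordering run is a straight RPC sequence; the commands as they appear in the trace, collected in order:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1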
00:13:44.981 [2024-12-09 11:48:52.771409] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175220 ]
00:13:45.553 Attached to nqn.2016-06.io.spdk:cnode1
00:13:45.553 Namespace ID: 1 size: 1GB
00:13:45.553 fused_ordering(0) [fused_ordering(1) through fused_ordering(958) elided: 958 further repetitive per-iteration entries logged between 00:13:45.553 and 00:13:47.419]
00:13:47.419 fused_ordering(959) 00:13:47.419 fused_ordering(960) 00:13:47.419 fused_ordering(961) 00:13:47.419 fused_ordering(962) 00:13:47.419 fused_ordering(963) 00:13:47.419 fused_ordering(964) 00:13:47.419 fused_ordering(965) 00:13:47.419 fused_ordering(966) 00:13:47.419 fused_ordering(967) 00:13:47.419 fused_ordering(968) 00:13:47.419 fused_ordering(969) 00:13:47.419 fused_ordering(970) 00:13:47.419 fused_ordering(971) 00:13:47.420 fused_ordering(972) 00:13:47.420 fused_ordering(973) 00:13:47.420 fused_ordering(974) 00:13:47.420 fused_ordering(975) 00:13:47.420 fused_ordering(976) 00:13:47.420 fused_ordering(977) 00:13:47.420 fused_ordering(978) 00:13:47.420 fused_ordering(979) 00:13:47.420 fused_ordering(980) 00:13:47.420 fused_ordering(981) 00:13:47.420 fused_ordering(982) 00:13:47.420 fused_ordering(983) 00:13:47.420 fused_ordering(984) 00:13:47.420 fused_ordering(985) 00:13:47.420 fused_ordering(986) 00:13:47.420 fused_ordering(987) 00:13:47.420 fused_ordering(988) 00:13:47.420 fused_ordering(989) 00:13:47.420 fused_ordering(990) 00:13:47.420 fused_ordering(991) 00:13:47.420 fused_ordering(992) 00:13:47.420 fused_ordering(993) 00:13:47.420 fused_ordering(994) 00:13:47.420 fused_ordering(995) 00:13:47.420 fused_ordering(996) 00:13:47.420 fused_ordering(997) 00:13:47.420 fused_ordering(998) 00:13:47.420 fused_ordering(999) 00:13:47.420 fused_ordering(1000) 00:13:47.420 fused_ordering(1001) 00:13:47.420 fused_ordering(1002) 00:13:47.420 fused_ordering(1003) 00:13:47.420 fused_ordering(1004) 00:13:47.420 fused_ordering(1005) 00:13:47.420 fused_ordering(1006) 00:13:47.420 fused_ordering(1007) 00:13:47.420 fused_ordering(1008) 00:13:47.420 fused_ordering(1009) 00:13:47.420 fused_ordering(1010) 00:13:47.420 fused_ordering(1011) 00:13:47.420 fused_ordering(1012) 00:13:47.420 fused_ordering(1013) 00:13:47.420 fused_ordering(1014) 00:13:47.420 fused_ordering(1015) 00:13:47.420 fused_ordering(1016) 00:13:47.420 fused_ordering(1017) 00:13:47.420 fused_ordering(1018) 00:13:47.420 fused_ordering(1019) 00:13:47.420 fused_ordering(1020) 00:13:47.420 fused_ordering(1021) 00:13:47.420 fused_ordering(1022) 00:13:47.420 fused_ordering(1023) 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # sync 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # set +e 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # for i in {1..20} 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:13:47.420 rmmod nvme_tcp 00:13:47.420 rmmod nvme_fabrics 00:13:47.420 rmmod nvme_keyring 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # set -e 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@130 -- # return 0 00:13:47.420 11:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 4175109 ']' 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 4175109 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 4175109 ']' 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 4175109 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.420 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4175109 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4175109' 00:13:47.701 killing process with pid 4175109 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 4175109 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 4175109 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # iptr 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # remove_spdk_ns 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.701 11:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:13:49.789 00:13:49.789 real 0m13.477s 00:13:49.789 user 0m7.150s 00:13:49.789 sys 0m7.179s 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.789 ************************************ 00:13:49.789 END TEST nvmf_fused_ordering 00:13:49.789 
************************************ 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.789 ************************************ 00:13:49.789 START TEST nvmf_ns_masking 00:13:49.789 ************************************ 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:49.789 * Looking for test storage... 00:13:49.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.789 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:50.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.052 --rc genhtml_branch_coverage=1 00:13:50.052 --rc genhtml_function_coverage=1 00:13:50.052 --rc genhtml_legend=1 00:13:50.052 --rc geninfo_all_blocks=1 00:13:50.052 --rc geninfo_unexecuted_blocks=1 00:13:50.052 00:13:50.052 ' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:50.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.052 --rc genhtml_branch_coverage=1 00:13:50.052 --rc genhtml_function_coverage=1 00:13:50.052 --rc genhtml_legend=1 00:13:50.052 --rc geninfo_all_blocks=1 00:13:50.052 --rc geninfo_unexecuted_blocks=1 00:13:50.052 00:13:50.052 ' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:50.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.052 --rc genhtml_branch_coverage=1 00:13:50.052 --rc genhtml_function_coverage=1 00:13:50.052 --rc genhtml_legend=1 00:13:50.052 --rc geninfo_all_blocks=1 00:13:50.052 --rc geninfo_unexecuted_blocks=1 00:13:50.052 00:13:50.052 ' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:50.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.052 --rc genhtml_branch_coverage=1 00:13:50.052 --rc genhtml_function_coverage=1 00:13:50.052 --rc genhtml_legend=1 00:13:50.052 --rc geninfo_all_blocks=1 00:13:50.052 --rc geninfo_unexecuted_blocks=1 00:13:50.052 00:13:50.052 ' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # : 0 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:13:50.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:13:50.052 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@56 -- # have_pci_nics=0 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4fc04928-e54d-4eb3-8daf-c761eae47390 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0e0e45c5-5bea-4945-82d4-006b9b048191 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f8b51de4-6148-4dfe-a1d0-fcfad6471303 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # xtrace_disable 00:13:50.053 11:48:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_devs=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_devs 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_net_devs=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # pci_drivers=() 00:13:58.194 11:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # local -A pci_drivers 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # net_devs=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga net_devs 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # e810=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga e810 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # x722=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga x722 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # mlx=() 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # local -ga mlx 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:58.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:58.194 11:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:58.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:58.194 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 
00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:58.194 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.194 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.195 11:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:13:58.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:13:58.195 00:13:58.195 --- 10.0.0.2 ping statistics --- 00:13:58.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.195 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:13:58.195 00:13:58.195 --- 10.0.0.1 ping statistics --- 00:13:58.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.195 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=4179902 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 4179902 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4179902 ']' 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.195 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.195 [2024-12-09 11:49:05.251624] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:13:58.195 [2024-12-09 11:49:05.251700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.195 [2024-12-09 11:49:05.351292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.195 [2024-12-09 11:49:05.401558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.195 [2024-12-09 11:49:05.401612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.195 [2024-12-09 11:49:05.401620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.195 [2024-12-09 11:49:05.401627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.195 [2024-12-09 11:49:05.401634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
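
waitforlisten, entered above, blocks until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket. A hedged sketch of that launch-and-poll pattern, assuming the default /var/tmp/spdk.sock and an illustrative retry bound (the real helper also validates the target PID):

    # Launch the target inside the target namespace, then poll the RPC socket;
    # rpc_get_methods is a cheap query that succeeds once the app is listening.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
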
00:13:58.195 [2024-12-09 11:49:05.402443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.195 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.195 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:58.195 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:58.195 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.195 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.455 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.455 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.455 [2024-12-09 11:49:06.266100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.455 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:58.455 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:58.455 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:58.716 Malloc1 00:13:58.716 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:58.976 Malloc2 00:13:58.976 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.238 11:49:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:59.238 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.500 [2024-12-09 11:49:07.205111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.500 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:59.500 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8b51de4-6148-4dfe-a1d0-fcfad6471303 -a 10.0.0.2 -s 4420 -i 4 00:13:59.761 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.761 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.761 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.761 11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.761 
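
The connect issued a few entries above, spelled out as a single nvme-cli call with this run's values: -q carries the host NQN that the masking RPCs later match against, -I the host identifier, and -i the number of I/O queues.

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I f8b51de4-6148-4dfe-a1d0-fcfad6471303 -i 4

waitforserial then polls lsblk for the SPDKISFASTANDAWESOME serial until the expected device count appears, which is what the sleep-and-grep loop continuing below is doing.
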
11:49:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.677 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.938 [ 0]:0x1 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6363cd67c12e461c8487cd7c9d245092 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6363cd67c12e461c8487cd7c9d245092 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.938 [ 0]:0x1 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.938 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.198 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6363cd67c12e461c8487cd7c9d245092 00:14:02.198 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6363cd67c12e461c8487cd7c9d245092 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.198 11:49:09 
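
ns_is_visible, exercised here and throughout the rest of the test, judges visibility purely from the host side: the NSID is grepped out of nvme list-ns, and the namespace counts as visible only when its NGUID is non-zero. A reconstruction consistent with the @43-@45 trace lines (the authoritative helper is in target/ns_masking.sh):

    # Reconstructed visibility probe: masked namespaces report an all-zero NGUID.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
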
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:02.198 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.199 [ 1]:0x2 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:02.199 11:49:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.199 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.460 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8b51de4-6148-4dfe-a1d0-fcfad6471303 -a 10.0.0.2 -s 4420 -i 4 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:02.721 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.270 [ 0]:0x2 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ccf369d8b2df41de887496906ad561cb 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.270 11:49:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:05.270 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:05.270 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.270 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.270 [ 0]:0x1 00:14:05.270 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.270 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6363cd67c12e461c8487cd7c9d245092 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6363cd67c12e461c8487cd7c9d245092 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.532 [ 1]:0x2 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.532 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.793 11:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.793 [ 0]:0x2 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.793 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.054 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:06.054 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f8b51de4-6148-4dfe-a1d0-fcfad6471303 -a 10.0.0.2 -s 4420 -i 4 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:06.315 11:49:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:08.229 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:08.229 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:08.229 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.229 [ 0]:0x1 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6363cd67c12e461c8487cd7c9d245092 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6363cd67c12e461c8487cd7c9d245092 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.229 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.490 [ 1]:0x2 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.490 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.750 [ 0]:0x2 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.750 11:49:16 
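
The lines above complete the core masking round-trip: the namespace was attached with --no-auto-visible, exposed to one host with nvmf_ns_add_host, and hidden again with nvmf_ns_remove_host. The same three calls in plain form, with this run's NQNs and the rpc.py path shortened for readability:

    rpc=./scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # unmask for host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # mask again
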
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:08.750 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:08.750 [2024-12-09 11:49:16.635239] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:09.012 request: 00:14:09.012 { 00:14:09.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.012 "nsid": 2, 00:14:09.012 "host": "nqn.2016-06.io.spdk:host1", 00:14:09.012 "method": "nvmf_ns_remove_host", 00:14:09.012 "req_id": 1 00:14:09.012 } 00:14:09.012 Got JSON-RPC error response 00:14:09.012 response: 00:14:09.012 { 00:14:09.012 "code": -32602, 00:14:09.012 "message": "Invalid parameters" 00:14:09.012 } 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:09.012 11:49:16 
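
NOT, wrapping every negative assertion in this trace, runs its arguments, swallows the expected failure, and itself fails if the wrapped command unexpectedly succeeded. A simplified reconstruction of the @652-@679 sequence above (the canonical helper, with its valid_exec_arg checks, lives in common/autotest_common.sh):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed exactly when the wrapped command failed
    }
    NOT ns_is_visible 0x1   # passes only while NSID 1 is masked for this host
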
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.012 [ 0]:0x2 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccf369d8b2df41de887496906ad561cb 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccf369d8b2df41de887496906ad561cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4182405 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4182405 /var/tmp/host.sock 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 4182405 ']' 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.012 11:49:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.273 [2024-12-09 11:49:16.917319] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:14:09.273 [2024-12-09 11:49:16.917370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4182405 ] 00:14:09.273 [2024-12-09 11:49:17.007107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.273 [2024-12-09 11:49:17.042595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.842 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.842 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:09.843 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.103 11:49:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4fc04928-e54d-4eb3-8daf-c761eae47390 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4FC04928E54D4EB38DAFC761EAE47390 -i 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0e0e45c5-5bea-4945-82d4-006b9b048191 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:10.364 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0E0E45C55BEA494582D4006B9B048191 -i 00:14:10.625 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.886 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:10.886 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:10.886 11:49:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:11.458 nvme0n1 00:14:11.458 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:11.458 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:11.718 nvme1n2 00:14:11.718 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:11.718 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:11.718 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:11.718 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:11.718 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:11.979 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:11.979 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:11.979 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:11.979 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:12.241 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4fc04928-e54d-4eb3-8daf-c761eae47390 == \4\f\c\0\4\9\2\8\-\e\5\4\d\-\4\e\b\3\-\8\d\a\f\-\c\7\6\1\e\a\e\4\7\3\9\0 ]] 00:14:12.241 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:12.241 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:12.241 11:49:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:12.241 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
0e0e45c5-5bea-4945-82d4-006b9b048191 == \0\e\0\e\4\5\c\5\-\5\b\e\a\-\4\9\4\5\-\8\2\d\4\-\0\0\6\b\9\b\0\4\8\1\9\1 ]] 00:14:12.241 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.501 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4fc04928-e54d-4eb3-8daf-c761eae47390 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4FC04928E54D4EB38DAFC761EAE47390 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4FC04928E54D4EB38DAFC761EAE47390 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.762 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4FC04928E54D4EB38DAFC761EAE47390 00:14:12.763 [2024-12-09 11:49:20.569961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:12.763 [2024-12-09 11:49:20.569988] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:12.763 [2024-12-09 11:49:20.569995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.763 request: 00:14:12.763 { 00:14:12.763 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.763 "namespace": { 00:14:12.763 "bdev_name": 
"invalid", 00:14:12.763 "nsid": 1, 00:14:12.763 "nguid": "4FC04928E54D4EB38DAFC761EAE47390", 00:14:12.763 "no_auto_visible": false, 00:14:12.763 "hide_metadata": false 00:14:12.763 }, 00:14:12.763 "method": "nvmf_subsystem_add_ns", 00:14:12.763 "req_id": 1 00:14:12.763 } 00:14:12.763 Got JSON-RPC error response 00:14:12.763 response: 00:14:12.763 { 00:14:12.763 "code": -32602, 00:14:12.763 "message": "Invalid parameters" 00:14:12.763 } 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4fc04928-e54d-4eb3-8daf-c761eae47390 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:14:12.763 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4FC04928E54D4EB38DAFC761EAE47390 -i 00:14:13.024 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:14.940 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:14.940 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:14.940 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 4182405 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4182405 ']' 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4182405 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.201 11:49:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4182405 00:14:15.201 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:15.201 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:15.201 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4182405' 00:14:15.201 killing process with pid 4182405 00:14:15.201 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4182405 00:14:15.201 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4182405 00:14:15.462 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # sync 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # set +e 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # for i in {1..20} 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:14:15.723 rmmod nvme_tcp 00:14:15.723 rmmod nvme_fabrics 00:14:15.723 rmmod nvme_keyring 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # set -e 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@130 -- # return 0 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 4179902 ']' 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 4179902 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 4179902 ']' 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 4179902 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4179902 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4179902' 00:14:15.723 killing process with pid 4179902 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 4179902 00:14:15.723 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 4179902 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # iptr 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # remove_spdk_ns 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.984 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:14:17.903 00:14:17.903 real 0m28.169s 00:14:17.903 user 0m31.979s 00:14:17.903 sys 0m8.162s 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.903 ************************************ 00:14:17.903 END TEST nvmf_ns_masking 00:14:17.903 ************************************ 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.903 11:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.166 ************************************ 00:14:18.166 START TEST nvmf_nvme_cli 00:14:18.166 ************************************ 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:18.166 * Looking for test storage... 
00:14:18.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.166 --rc genhtml_branch_coverage=1 00:14:18.166 --rc genhtml_function_coverage=1 00:14:18.166 --rc genhtml_legend=1 00:14:18.166 --rc geninfo_all_blocks=1 00:14:18.166 --rc geninfo_unexecuted_blocks=1 00:14:18.166 00:14:18.166 ' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.166 --rc genhtml_branch_coverage=1 00:14:18.166 --rc genhtml_function_coverage=1 00:14:18.166 --rc genhtml_legend=1 00:14:18.166 --rc geninfo_all_blocks=1 00:14:18.166 --rc geninfo_unexecuted_blocks=1 00:14:18.166 00:14:18.166 ' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.166 --rc genhtml_branch_coverage=1 00:14:18.166 --rc genhtml_function_coverage=1 00:14:18.166 --rc genhtml_legend=1 00:14:18.166 --rc geninfo_all_blocks=1 00:14:18.166 --rc geninfo_unexecuted_blocks=1 00:14:18.166 00:14:18.166 ' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.166 --rc genhtml_branch_coverage=1 00:14:18.166 --rc genhtml_function_coverage=1 00:14:18.166 --rc genhtml_legend=1 00:14:18.166 --rc geninfo_all_blocks=1 00:14:18.166 --rc geninfo_unexecuted_blocks=1 00:14:18.166 00:14:18.166 ' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
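The lt 1.15 2 check traced above delegates to cmp_versions in scripts/common.sh: each version string is split into fields on '.', '-' and ':' and the fields are compared as decimals until one side wins. A condensed, self-contained sketch of that comparison, reconstructed from the trace rather than copied from scripts/common.sh (the function name and the handling of missing or non-numeric fields here are approximations):

# Sketch of the field-wise version compare visible in the trace above.
# Returns 0 (success) when $1 < $2, mirroring "lt 1.15 2" -> true.
version_lt() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # Missing fields compare as 0; non-numeric fields are coerced to 0.
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        ((d1 > d2)) && return 1
        ((d1 < d2)) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2, enable the branch/function coverage opts"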
00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.166 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.166 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.166 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.166 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.166 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # : 0 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:14:18.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@56 -- # have_pci_nics=0 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:18.167 11:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # xtrace_disable 00:14:18.167 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.314 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_devs=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_devs 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_net_devs=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # pci_drivers=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # local -A pci_drivers 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # net_devs=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga net_devs 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # e810=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga e810 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # x722=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga x722 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # mlx=() 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # local -ga mlx 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:26.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:26.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.315 
11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:26.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:26.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.315 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:14:26.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:14:26.315 00:14:26.315 --- 10.0.0.2 ping statistics --- 00:14:26.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.315 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:14:26.315 00:14:26.315 --- 10.0.0.1 ping statistics --- 00:14:26.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.315 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:26.315 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=4187792 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 4187792 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 4187792 ']' 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.316 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.316 [2024-12-09 11:49:33.357610] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
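The nvmf_tcp_init sequence traced above builds the point-to-point test topology: the first e810 port (cvl_0_0) is moved into a private network namespace to host the target, the second (cvl_0_1) stays in the root namespace as the initiator side, an iptables rule admits TCP port 4420, and a ping in each direction confirms connectivity before the target starts. Stripped of the xtrace framing, the setup amounts to roughly the following (the cvl_* interface names and 10.0.0.x addresses are simply the values this particular run detected and assigned):

# Target side lives in its own netns; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic arriving from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions before launching nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1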
00:14:26.316 [2024-12-09 11:49:33.357700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.316 [2024-12-09 11:49:33.454705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.316 [2024-12-09 11:49:33.509211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.316 [2024-12-09 11:49:33.509267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.316 [2024-12-09 11:49:33.509275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.316 [2024-12-09 11:49:33.509283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.316 [2024-12-09 11:49:33.509289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.316 [2024-12-09 11:49:33.511261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.316 [2024-12-09 11:49:33.511395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.316 [2024-12-09 11:49:33.511563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.316 [2024-12-09 11:49:33.511563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.316 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.316 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:26.316 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:26.316 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.316 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 [2024-12-09 11:49:34.213444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 Malloc0 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
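Once nvmf_tgt is up inside the namespace, nvme_cli.sh provisions it entirely over JSON-RPC, as the rpc_cmd calls here and immediately below show: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, one subsystem exposing both bdevs as namespaces, and data plus discovery listeners on 10.0.0.2:4420. Expressed as direct scripts/rpc.py invocations the sequence is approximately the following (a sketch only; rpc_cmd wraps the same calls and points them at the target's RPC socket):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Transport and backing bdevs, with the options exactly as traced.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
# One subsystem carrying both namespaces.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
# Data and discovery listeners the initiator will hit at 10.0.0.2:4420.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420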
00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 Malloc1 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 [2024-12-09 11:49:34.317181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:26.837 00:14:26.837 Discovery Log Number of Records 2, Generation counter 2 00:14:26.837 =====Discovery Log Entry 0====== 00:14:26.837 trtype: tcp 00:14:26.837 adrfam: ipv4 00:14:26.837 subtype: current discovery subsystem 00:14:26.837 treq: not required 00:14:26.837 portid: 0 00:14:26.837 trsvcid: 4420 00:14:26.837 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:26.837 traddr: 10.0.0.2 00:14:26.837 eflags: explicit discovery connections, duplicate discovery information 00:14:26.837 sectype: none 00:14:26.837 =====Discovery Log Entry 1====== 00:14:26.837 trtype: tcp 00:14:26.837 adrfam: ipv4 00:14:26.837 subtype: nvme subsystem 00:14:26.837 treq: not required 00:14:26.837 portid: 0 00:14:26.837 trsvcid: 4420 00:14:26.837 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:26.837 traddr: 10.0.0.2 00:14:26.837 eflags: none 00:14:26.837 sectype: none 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:26.837 11:49:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:28.225 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:30.770 11:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:30.770 /dev/nvme0n2 ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.770 11:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:30.770 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # sync 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # set +e 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # for i in {1..20} 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:14:31.032 rmmod nvme_tcp 00:14:31.032 rmmod nvme_fabrics 00:14:31.032 rmmod nvme_keyring 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # set -e 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@130 -- # return 0 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 4187792 ']' 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 4187792 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 4187792 ']' 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 4187792 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
4187792 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187792' 00:14:31.032 killing process with pid 4187792 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 4187792 00:14:31.032 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 4187792 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # iptr 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # remove_spdk_ns 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.293 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.206 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:14:33.206 00:14:33.206 real 0m15.213s 00:14:33.206 user 0m23.826s 00:14:33.206 sys 0m6.208s 00:14:33.206 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 ************************************ 00:14:33.207 END TEST nvmf_nvme_cli 00:14:33.207 ************************************ 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.207 11:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.468 ************************************ 00:14:33.468 START TEST nvmf_vfio_user 00:14:33.468 ************************************ 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:33.468 * Looking for test storage... 00:14:33.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.468 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.469 --rc genhtml_branch_coverage=1 00:14:33.469 --rc genhtml_function_coverage=1 00:14:33.469 --rc genhtml_legend=1 00:14:33.469 --rc geninfo_all_blocks=1 00:14:33.469 --rc geninfo_unexecuted_blocks=1 00:14:33.469 00:14:33.469 ' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.469 --rc genhtml_branch_coverage=1 00:14:33.469 --rc genhtml_function_coverage=1 00:14:33.469 --rc genhtml_legend=1 00:14:33.469 --rc geninfo_all_blocks=1 00:14:33.469 --rc geninfo_unexecuted_blocks=1 00:14:33.469 00:14:33.469 ' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.469 --rc genhtml_branch_coverage=1 00:14:33.469 --rc genhtml_function_coverage=1 00:14:33.469 --rc genhtml_legend=1 00:14:33.469 --rc geninfo_all_blocks=1 00:14:33.469 --rc geninfo_unexecuted_blocks=1 00:14:33.469 00:14:33.469 ' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.469 --rc genhtml_branch_coverage=1 00:14:33.469 --rc genhtml_function_coverage=1 00:14:33.469 --rc genhtml_legend=1 00:14:33.469 --rc geninfo_all_blocks=1 00:14:33.469 --rc geninfo_unexecuted_blocks=1 00:14:33.469 00:14:33.469 ' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # : 0 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:14:33.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@56 -- # have_pci_nics=0 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
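(For orientation: the scripts/common.sh trace a little earlier, before the nvmf common.sh sourcing, is the lcov version gate, with `lt 1.15 2` driving cmp_versions. Below is a condensed sketch of that logic, simplified from the xtrace rather than copied verbatim; the real helper also validates each component via `decimal`.)

    # split each version string on the characters . - : and compare component-wise
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov < 2: use --rc lcov_branch_coverage=1 style options'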
00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.469 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4189605 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4189605' 00:14:33.730 Process pid: 4189605 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4189605 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 4189605 ']' 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.730 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.731 11:49:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:33.731 [2024-12-09 11:49:41.413657] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:14:33.731 [2024-12-09 11:49:41.413740] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.731 [2024-12-09 11:49:41.499886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.731 [2024-12-09 11:49:41.534576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.731 [2024-12-09 11:49:41.534611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:33.731 [2024-12-09 11:49:41.534617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.731 [2024-12-09 11:49:41.534622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.731 [2024-12-09 11:49:41.534629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.731 [2024-12-09 11:49:41.536176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.731 [2024-12-09 11:49:41.536295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.731 [2024-12-09 11:49:41.536452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.731 [2024-12-09 11:49:41.536454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.670 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.670 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:34.670 11:49:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:35.613 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:35.873 Malloc1 00:14:35.873 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:36.133 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:36.133 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:36.394 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.394 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:36.394 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:36.655 Malloc2 00:14:36.655 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
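(Condensed for reference: the vfio-user target bring-up xtraced above, with the loop's last two calls for cnode2 following just below, amounts to the sequence sketched here. The commands are verbatim from the trace; $SPDK stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout purely to shorten the lines.)

    # launch the target on cores 0-3, wait for /var/tmp/spdk.sock, then:
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      # 64 MiB malloc bdev, 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
      $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      # the listener address is the vfio-user socket directory, not an IP:port
      $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done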
00:14:36.917 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:36.917 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:37.179 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:37.180 [2024-12-09 11:49:44.956140] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:14:37.180 [2024-12-09 11:49:44.956185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4190305 ] 00:14:37.180 [2024-12-09 11:49:44.994930] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:37.180 [2024-12-09 11:49:45.003910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.180 [2024-12-09 11:49:45.003927] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f04519a2000 00:14:37.180 [2024-12-09 11:49:45.004907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.005912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.006920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.007927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.008937] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.009944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.010964] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.011950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:37.180 [2024-12-09 11:49:45.012965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:37.180 [2024-12-09 11:49:45.012972] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0451997000 00:14:37.180 [2024-12-09 11:49:45.013885] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:37.180 [2024-12-09 11:49:45.023327] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:37.180 [2024-12-09 11:49:45.023350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:37.180 [2024-12-09 11:49:45.028060] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:37.180 [2024-12-09 11:49:45.028096] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:37.180 [2024-12-09 11:49:45.028161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:37.180 [2024-12-09 11:49:45.028176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:37.180 [2024-12-09 11:49:45.028180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:37.180 [2024-12-09 11:49:45.029064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:37.180 [2024-12-09 11:49:45.029074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:37.180 [2024-12-09 11:49:45.029080] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:37.180 [2024-12-09 11:49:45.030066] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:37.180 [2024-12-09 11:49:45.030072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:37.180 [2024-12-09 11:49:45.030077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.031070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:37.180 [2024-12-09 11:49:45.031076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.032074] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
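(For readability: the register traffic in the DEBUG lines here, continuing below, is the standard NVMe controller-enable handshake, just carried over the vfio-user socket instead of a PCIe BAR. Decoded against the NVMe base-spec register map, the offsets in these lines are:)

    # offset 0x00  CAP   read   0x201e0100ff    controller capabilities
    # offset 0x08  VS    read   0x10300         NVMe version 1.3
    # offset 0x14  CC    read 0x0, write 0x460001  EN already clear, then EN=1
    #                                           (with 64 B SQ / 16 B CQ entry sizes)
    # offset 0x1c  CSTS  read   0x0, later 0x1  poll RDY=0 while disabled, RDY=1 once enabled
    # offset 0x24  AQA   write  0xff00ff        admin SQ/CQ sizes, 255 zero-based = 256 entries
    # offset 0x28  ASQ   write  0x2000003c0000  admin submission queue base
    # offset 0x30  ACQ   write  0x2000003be000  admin completion queue base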
00:14:37.180 [2024-12-09 11:49:45.032080] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:37.180 [2024-12-09 11:49:45.032084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.032089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.032194] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:37.180 [2024-12-09 11:49:45.032198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.032201] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:37.180 [2024-12-09 11:49:45.033077] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:37.180 [2024-12-09 11:49:45.034082] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:37.180 [2024-12-09 11:49:45.035091] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.180 [2024-12-09 11:49:45.036095] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.180 [2024-12-09 11:49:45.036147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.180 [2024-12-09 11:49:45.037105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:37.180 [2024-12-09 11:49:45.037110] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.180 [2024-12-09 11:49:45.037114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:37.180 [2024-12-09 11:49:45.037134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037149] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.180 [2024-12-09 11:49:45.037152] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.180 [2024-12-09 11:49:45.037155] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.180 [2024-12-09 11:49:45.037166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:37.180 [2024-12-09 11:49:45.037197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:37.180 [2024-12-09 11:49:45.037205] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:37.180 [2024-12-09 11:49:45.037209] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:37.180 [2024-12-09 11:49:45.037212] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:37.180 [2024-12-09 11:49:45.037215] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:37.180 [2024-12-09 11:49:45.037219] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:37.180 [2024-12-09 11:49:45.037222] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:37.180 [2024-12-09 11:49:45.037225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:37.180 [2024-12-09 11:49:45.037248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:37.180 [2024-12-09 11:49:45.037256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.180 [2024-12-09 11:49:45.037262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.180 [2024-12-09 11:49:45.037268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.180 [2024-12-09 11:49:45.037274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.180 [2024-12-09 11:49:45.037277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:37.180 [2024-12-09 11:49:45.037300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:37.180 [2024-12-09 11:49:45.037305] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:37.180 
[2024-12-09 11:49:45.037308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.180 [2024-12-09 11:49:45.037334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:37.180 [2024-12-09 11:49:45.037377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:37.180 [2024-12-09 11:49:45.037383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037389] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:37.181 [2024-12-09 11:49:45.037392] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:37.181 [2024-12-09 11:49:45.037394] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037418] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:37.181 [2024-12-09 11:49:45.037430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037441] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.181 [2024-12-09 11:49:45.037444] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.181 [2024-12-09 11:49:45.037446] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:37.181 [2024-12-09 11:49:45.037494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.181 [2024-12-09 11:49:45.037496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037550] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:37.181 [2024-12-09 11:49:45.037553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:37.181 [2024-12-09 11:49:45.037557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:37.181 [2024-12-09 11:49:45.037571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037645] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:37.181 [2024-12-09 11:49:45.037648] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:37.181 [2024-12-09 11:49:45.037651] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:37.181 [2024-12-09 11:49:45.037653] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:37.181 [2024-12-09 11:49:45.037656] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:37.181 [2024-12-09 11:49:45.037660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:37.181 [2024-12-09 11:49:45.037666] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:37.181 [2024-12-09 11:49:45.037669] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:37.181 [2024-12-09 11:49:45.037671] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037682] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:37.181 [2024-12-09 11:49:45.037685] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:37.181 [2024-12-09 11:49:45.037687] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037697] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:37.181 [2024-12-09 11:49:45.037700] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:37.181 [2024-12-09 11:49:45.037702] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:37.181 [2024-12-09 11:49:45.037707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:37.181 [2024-12-09 11:49:45.037712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:37.181 [2024-12-09 11:49:45.037732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:37.181 ===================================================== 00:14:37.181 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.181 ===================================================== 00:14:37.181 Controller Capabilities/Features 00:14:37.181 ================================ 00:14:37.181 Vendor ID: 4e58 00:14:37.181 Subsystem Vendor ID: 4e58 00:14:37.181 Serial Number: SPDK1 00:14:37.181 Model Number: SPDK bdev Controller 00:14:37.181 Firmware Version: 25.01 00:14:37.181 Recommended Arb Burst: 6 00:14:37.181 IEEE OUI Identifier: 8d 6b 50 00:14:37.181 Multi-path I/O 00:14:37.181 May have multiple subsystem ports: Yes 00:14:37.181 May have multiple controllers: Yes 00:14:37.181 Associated with SR-IOV VF: No 00:14:37.181 Max Data Transfer Size: 131072 00:14:37.181 Max Number of Namespaces: 32 00:14:37.181 Max Number of I/O Queues: 127 00:14:37.181 NVMe Specification Version (VS): 1.3 00:14:37.181 NVMe Specification Version (Identify): 1.3 00:14:37.181 Maximum Queue Entries: 256 00:14:37.181 Contiguous Queues Required: Yes 00:14:37.181 Arbitration Mechanisms Supported 00:14:37.181 Weighted Round Robin: Not Supported 00:14:37.181 Vendor Specific: Not Supported 00:14:37.181 Reset Timeout: 15000 ms 00:14:37.181 Doorbell Stride: 4 bytes 00:14:37.181 NVM Subsystem Reset: Not Supported 00:14:37.181 Command Sets Supported 00:14:37.181 NVM Command Set: Supported 00:14:37.181 Boot Partition: Not Supported 00:14:37.181 Memory Page Size Minimum: 4096 bytes 00:14:37.181 Memory Page Size Maximum: 4096 bytes 00:14:37.181 Persistent Memory Region: Not Supported 00:14:37.181 Optional Asynchronous Events Supported 00:14:37.181 Namespace Attribute Notices: Supported 00:14:37.181 Firmware Activation Notices: Not Supported 00:14:37.181 ANA Change Notices: Not Supported 00:14:37.181 PLE Aggregate Log Change Notices: Not Supported 00:14:37.181 LBA Status Info Alert Notices: Not Supported 00:14:37.181 EGE Aggregate Log Change Notices: Not Supported 00:14:37.181 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.181 Zone Descriptor Change Notices: Not Supported 00:14:37.181 Discovery Log Change Notices: Not Supported 00:14:37.181 Controller Attributes 00:14:37.181 128-bit Host Identifier: Supported 00:14:37.181 Non-Operational Permissive Mode: Not Supported 00:14:37.181 NVM Sets: Not Supported 00:14:37.181 Read Recovery Levels: Not Supported 00:14:37.181 Endurance Groups: Not Supported 00:14:37.181 Predictable Latency Mode: Not Supported 00:14:37.181 Traffic Based Keep ALive: Not Supported 00:14:37.181 Namespace Granularity: Not Supported 00:14:37.181 SQ Associations: Not Supported 00:14:37.181 UUID List: Not Supported 00:14:37.181 Multi-Domain Subsystem: Not Supported 00:14:37.181 Fixed Capacity Management: Not Supported 00:14:37.181 Variable Capacity Management: Not Supported 00:14:37.182 Delete Endurance Group: Not Supported 00:14:37.182 Delete NVM Set: Not Supported 00:14:37.182 Extended LBA Formats Supported: Not Supported 00:14:37.182 Flexible Data Placement Supported: Not Supported 00:14:37.182 00:14:37.182 Controller Memory Buffer Support 00:14:37.182 ================================ 00:14:37.182 
Supported: No 00:14:37.182 00:14:37.182 Persistent Memory Region Support 00:14:37.182 ================================ 00:14:37.182 Supported: No 00:14:37.182 00:14:37.182 Admin Command Set Attributes 00:14:37.182 ============================ 00:14:37.182 Security Send/Receive: Not Supported 00:14:37.182 Format NVM: Not Supported 00:14:37.182 Firmware Activate/Download: Not Supported 00:14:37.182 Namespace Management: Not Supported 00:14:37.182 Device Self-Test: Not Supported 00:14:37.182 Directives: Not Supported 00:14:37.182 NVMe-MI: Not Supported 00:14:37.182 Virtualization Management: Not Supported 00:14:37.182 Doorbell Buffer Config: Not Supported 00:14:37.182 Get LBA Status Capability: Not Supported 00:14:37.182 Command & Feature Lockdown Capability: Not Supported 00:14:37.182 Abort Command Limit: 4 00:14:37.182 Async Event Request Limit: 4 00:14:37.182 Number of Firmware Slots: N/A 00:14:37.182 Firmware Slot 1 Read-Only: N/A 00:14:37.182 Firmware Activation Without Reset: N/A 00:14:37.182 Multiple Update Detection Support: N/A 00:14:37.182 Firmware Update Granularity: No Information Provided 00:14:37.182 Per-Namespace SMART Log: No 00:14:37.182 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.182 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:37.182 Command Effects Log Page: Supported 00:14:37.182 Get Log Page Extended Data: Supported 00:14:37.182 Telemetry Log Pages: Not Supported 00:14:37.182 Persistent Event Log Pages: Not Supported 00:14:37.182 Supported Log Pages Log Page: May Support 00:14:37.182 Commands Supported & Effects Log Page: Not Supported 00:14:37.182 Feature Identifiers & Effects Log Page:May Support 00:14:37.182 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.182 Data Area 4 for Telemetry Log: Not Supported 00:14:37.182 Error Log Page Entries Supported: 128 00:14:37.182 Keep Alive: Supported 00:14:37.182 Keep Alive Granularity: 10000 ms 00:14:37.182 00:14:37.182 NVM Command Set Attributes 00:14:37.182 ========================== 00:14:37.182 Submission Queue Entry Size 00:14:37.182 Max: 64 00:14:37.182 Min: 64 00:14:37.182 Completion Queue Entry Size 00:14:37.182 Max: 16 00:14:37.182 Min: 16 00:14:37.182 Number of Namespaces: 32 00:14:37.182 Compare Command: Supported 00:14:37.182 Write Uncorrectable Command: Not Supported 00:14:37.182 Dataset Management Command: Supported 00:14:37.182 Write Zeroes Command: Supported 00:14:37.182 Set Features Save Field: Not Supported 00:14:37.182 Reservations: Not Supported 00:14:37.182 Timestamp: Not Supported 00:14:37.182 Copy: Supported 00:14:37.182 Volatile Write Cache: Present 00:14:37.182 Atomic Write Unit (Normal): 1 00:14:37.182 Atomic Write Unit (PFail): 1 00:14:37.182 Atomic Compare & Write Unit: 1 00:14:37.182 Fused Compare & Write: Supported 00:14:37.182 Scatter-Gather List 00:14:37.182 SGL Command Set: Supported (Dword aligned) 00:14:37.182 SGL Keyed: Not Supported 00:14:37.182 SGL Bit Bucket Descriptor: Not Supported 00:14:37.182 SGL Metadata Pointer: Not Supported 00:14:37.182 Oversized SGL: Not Supported 00:14:37.182 SGL Metadata Address: Not Supported 00:14:37.182 SGL Offset: Not Supported 00:14:37.182 Transport SGL Data Block: Not Supported 00:14:37.182 Replay Protected Memory Block: Not Supported 00:14:37.182 00:14:37.182 Firmware Slot Information 00:14:37.182 ========================= 00:14:37.182 Active slot: 1 00:14:37.182 Slot 1 Firmware Revision: 25.01 00:14:37.182 00:14:37.182 00:14:37.182 Commands Supported and Effects 00:14:37.182 ============================== 00:14:37.182 Admin 
Commands 00:14:37.182 -------------- 00:14:37.182 Get Log Page (02h): Supported 00:14:37.182 Identify (06h): Supported 00:14:37.182 Abort (08h): Supported 00:14:37.182 Set Features (09h): Supported 00:14:37.182 Get Features (0Ah): Supported 00:14:37.182 Asynchronous Event Request (0Ch): Supported 00:14:37.182 Keep Alive (18h): Supported 00:14:37.182 I/O Commands 00:14:37.182 ------------ 00:14:37.182 Flush (00h): Supported LBA-Change 00:14:37.182 Write (01h): Supported LBA-Change 00:14:37.182 Read (02h): Supported 00:14:37.182 Compare (05h): Supported 00:14:37.182 Write Zeroes (08h): Supported LBA-Change 00:14:37.182 Dataset Management (09h): Supported LBA-Change 00:14:37.182 Copy (19h): Supported LBA-Change 00:14:37.182 00:14:37.182 Error Log 00:14:37.182 ========= 00:14:37.182 00:14:37.182 Arbitration 00:14:37.182 =========== 00:14:37.182 Arbitration Burst: 1 00:14:37.182 00:14:37.182 Power Management 00:14:37.182 ================ 00:14:37.182 Number of Power States: 1 00:14:37.182 Current Power State: Power State #0 00:14:37.182 Power State #0: 00:14:37.182 Max Power: 0.00 W 00:14:37.182 Non-Operational State: Operational 00:14:37.182 Entry Latency: Not Reported 00:14:37.182 Exit Latency: Not Reported 00:14:37.182 Relative Read Throughput: 0 00:14:37.182 Relative Read Latency: 0 00:14:37.182 Relative Write Throughput: 0 00:14:37.182 Relative Write Latency: 0 00:14:37.182 Idle Power: Not Reported 00:14:37.182 Active Power: Not Reported 00:14:37.182 Non-Operational Permissive Mode: Not Supported 00:14:37.182 00:14:37.182 Health Information 00:14:37.182 ================== 00:14:37.182 Critical Warnings: 00:14:37.182 Available Spare Space: OK 00:14:37.182 Temperature: OK 00:14:37.182 Device Reliability: OK 00:14:37.182 Read Only: No 00:14:37.182 Volatile Memory Backup: OK 00:14:37.182 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:37.182 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:37.182 Available Spare: 0% 00:14:37.182 Available Spare Threshold: 0% [2024-12-09 11:49:45.037806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:37.182 [2024-12-09 11:49:45.037816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:37.182 [2024-12-09 11:49:45.037839] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:37.182 [2024-12-09 11:49:45.037846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.182 [2024-12-09 11:49:45.037850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.182 [2024-12-09 11:49:45.037855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.182 [2024-12-09 11:49:45.037859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.182 [2024-12-09 11:49:45.038111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:37.182 [2024-12-09 11:49:45.038119] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:37.182 [2024-12-09 11:49:45.039116]
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-12-09 11:49:45.041647] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us [2024-12-09 11:49:45.041653] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms [2024-12-09 11:49:45.042133] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-12-09 11:49:45.042140] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds [2024-12-09 11:49:45.042194] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-12-09 11:49:45.043161] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:37.443 Life Percentage Used: 0% 00:14:37.443 Data Units Read: 0 00:14:37.443 Data Units Written: 0 00:14:37.443 Host Read Commands: 0 00:14:37.443 Host Write Commands: 0 00:14:37.443 Controller Busy Time: 0 minutes 00:14:37.443 Power Cycles: 0 00:14:37.443 Power On Hours: 0 hours 00:14:37.443 Unsafe Shutdowns: 0 00:14:37.443 Unrecoverable Media Errors: 0 00:14:37.443 Lifetime Error Log Entries: 0 00:14:37.443 Warning Temperature Time: 0 minutes 00:14:37.443 Critical Temperature Time: 0 minutes 00:14:37.443 00:14:37.443 Number of Queues 00:14:37.443 ================ 00:14:37.443 Number of I/O Submission Queues: 127 00:14:37.443 Number of I/O Completion Queues: 127 00:14:37.443 00:14:37.443 Active Namespaces 00:14:37.443 ================= 00:14:37.443 Namespace ID:1 00:14:37.443 Error Recovery Timeout: Unlimited 00:14:37.443 Command Set Identifier: NVM (00h) 00:14:37.443 Deallocate: Supported 00:14:37.443 Deallocated/Unwritten Error: Not Supported 00:14:37.443 Deallocated Read Value: Unknown 00:14:37.443 Deallocate in Write Zeroes: Not Supported 00:14:37.443 Deallocated Guard Field: 0xFFFF 00:14:37.443 Flush: Supported 00:14:37.443 Reservation: Supported 00:14:37.443 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.443 Size (in LBAs): 131072 (0GiB) 00:14:37.443 Capacity (in LBAs): 131072 (0GiB) 00:14:37.443 Utilization (in LBAs): 131072 (0GiB) 00:14:37.443 NGUID: CE975AB2D417450AB0016EC9C89A940E 00:14:37.443 UUID: ce975ab2-d417-450a-b001-6ec9c89a940e 00:14:37.443 Thin Provisioning: Not Supported 00:14:37.443 Per-NS Atomic Units: Yes 00:14:37.443 Atomic Boundary Size (Normal): 0 00:14:37.443 Atomic Boundary Size (PFail): 0 00:14:37.443 Atomic Boundary Offset: 0 00:14:37.443 Maximum Single Source Range Length: 65535 00:14:37.443 Maximum Copy Length: 65535 00:14:37.443 Maximum Source Range Count: 1 00:14:37.443 NGUID/EUI64 Never Reused: No 00:14:37.443 Namespace Write Protected: No 00:14:37.443 Number of LBA Formats: 1 00:14:37.443 Current LBA Format: LBA Format #00 00:14:37.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.443 00:14:37.443 11:49:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
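(The spdk_nvme_perf invocation just above runs 4 KiB reads at queue depth 128 for 5 seconds on a single core against the vfio-user controller. Since the log only shows the raw command line, the flags read roughly as follows; -s and -g are my gloss from the EAL parameter echo earlier, DPDK memory in MB and single-file memory segments, so treat those two as an assumption rather than gospel.)

    -r 'trtype:VFIOUSER traddr:... subnqn:...'  # connect over vfio-user rather than TCP
    -q 128     # outstanding I/O per worker (queue depth)
    -o 4096    # I/O size in bytes
    -w read    # workload; the later runs use -w write and -w randrw -M 50 (50% reads)
    -t 5       # run time in seconds
    -c 0x2     # core mask: lcore 1 only
    -s 256 -g  # DPDK memory size in MB; single-file memory segments
               # (-g matches the --single-file-segments seen in the EAL args above)

A quick sanity check on the results that follow: the MiB/s column is just IOPS times the 4 KiB I/O size, e.g.

    awk 'BEGIN { printf "%.2f MiB/s\n", 39926.80 * 4096 / 1048576 }'   # 155.96, as reported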
00:14:37.443 [2024-12-09 11:49:45.229322] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.730 Initializing NVMe Controllers 00:14:42.730 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:42.730 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:42.730 Initialization complete. Launching workers. 00:14:42.730 ======================================================== 00:14:42.730 Latency(us) 00:14:42.730 Device Information : IOPS MiB/s Average min max 00:14:42.730 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39926.80 155.96 3205.75 874.26 10749.70 00:14:42.730 ======================================================== 00:14:42.730 Total : 39926.80 155.96 3205.75 874.26 10749.70 00:14:42.730 00:14:42.730 [2024-12-09 11:49:50.245585] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.730 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:42.730 [2024-12-09 11:49:50.438451] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.019 Initializing NVMe Controllers 00:14:48.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:48.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:48.019 Initialization complete. Launching workers. 
00:14:48.019 ======================================================== 00:14:48.019 Latency(us) 00:14:48.019 Device Information : IOPS MiB/s Average min max 00:14:48.019 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15964.48 62.36 8017.29 7627.12 11975.04 00:14:48.019 ======================================================== 00:14:48.019 Total : 15964.48 62.36 8017.29 7627.12 11975.04 00:14:48.019 00:14:48.019 [2024-12-09 11:49:55.473945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.019 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:48.019 [2024-12-09 11:49:55.679798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.306 [2024-12-09 11:50:00.773933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.306 Initializing NVMe Controllers 00:14:53.306 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.306 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:53.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:53.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:53.306 Initialization complete. Launching workers. 00:14:53.306 Starting thread on core 2 00:14:53.306 Starting thread on core 3 00:14:53.306 Starting thread on core 1 00:14:53.306 11:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:53.306 [2024-12-09 11:50:01.022962] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.605 [2024-12-09 11:50:04.082945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.605 Initializing NVMe Controllers 00:14:56.605 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.605 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.605 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:56.605 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:56.605 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:56.605 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:56.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:56.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:56.605 Initialization complete. Launching workers. 
00:14:56.605 Starting thread on core 1 with urgent priority queue 00:14:56.605 Starting thread on core 2 with urgent priority queue 00:14:56.605 Starting thread on core 3 with urgent priority queue 00:14:56.605 Starting thread on core 0 with urgent priority queue 00:14:56.605 SPDK bdev Controller (SPDK1 ) core 0: 8135.00 IO/s 12.29 secs/100000 ios 00:14:56.605 SPDK bdev Controller (SPDK1 ) core 1: 9618.33 IO/s 10.40 secs/100000 ios 00:14:56.605 SPDK bdev Controller (SPDK1 ) core 2: 6505.67 IO/s 15.37 secs/100000 ios 00:14:56.605 SPDK bdev Controller (SPDK1 ) core 3: 10629.00 IO/s 9.41 secs/100000 ios 00:14:56.605 ======================================================== 00:14:56.605 00:14:56.605 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:56.605 [2024-12-09 11:50:04.318803] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.605 Initializing NVMe Controllers 00:14:56.605 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.605 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:56.605 Namespace ID: 1 size: 0GB 00:14:56.605 Initialization complete. 00:14:56.605 INFO: using host memory buffer for IO 00:14:56.605 Hello world! 00:14:56.605 [2024-12-09 11:50:04.354044] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.605 11:50:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:56.865 [2024-12-09 11:50:04.587158] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.810 Initializing NVMe Controllers 00:14:57.810 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.810 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.810 Initialization complete. Launching workers. 
00:14:57.810 submit (in ns) avg, min, max = 5987.0, 2825.0, 3998349.2 00:14:57.810 complete (in ns) avg, min, max = 18922.7, 1650.8, 4993576.7 00:14:57.810 00:14:57.810 Submit histogram 00:14:57.810 ================ 00:14:57.810 Range in us Cumulative Count 00:14:57.810 2.813 - 2.827: 0.0051% ( 1) 00:14:57.810 2.827 - 2.840: 0.4343% ( 84) 00:14:57.810 2.840 - 2.853: 1.6708% ( 242) 00:14:57.810 2.853 - 2.867: 4.4605% ( 546) 00:14:57.810 2.867 - 2.880: 9.6413% ( 1014) 00:14:57.810 2.880 - 2.893: 14.9499% ( 1039) 00:14:57.810 2.893 - 2.907: 21.1578% ( 1215) 00:14:57.810 2.907 - 2.920: 28.3875% ( 1415) 00:14:57.810 2.920 - 2.933: 34.1100% ( 1120) 00:14:57.810 2.933 - 2.947: 38.9025% ( 938) 00:14:57.810 2.947 - 2.960: 44.0681% ( 1011) 00:14:57.810 2.960 - 2.973: 50.4701% ( 1253) 00:14:57.810 2.973 - 2.987: 57.7662% ( 1428) 00:14:57.810 2.987 - 3.000: 66.1097% ( 1633) 00:14:57.810 3.000 - 3.013: 73.9935% ( 1543) 00:14:57.810 3.013 - 3.027: 81.3662% ( 1443) 00:14:57.810 3.027 - 3.040: 87.7733% ( 1254) 00:14:57.810 3.040 - 3.053: 92.6323% ( 951) 00:14:57.810 3.053 - 3.067: 96.0965% ( 678) 00:14:57.810 3.067 - 3.080: 97.7979% ( 333) 00:14:57.810 3.080 - 3.093: 98.7584% ( 188) 00:14:57.810 3.093 - 3.107: 99.1876% ( 84) 00:14:57.810 3.107 - 3.120: 99.3767% ( 37) 00:14:57.810 3.120 - 3.133: 99.4993% ( 24) 00:14:57.810 3.133 - 3.147: 99.5299% ( 6) 00:14:57.810 3.147 - 3.160: 99.5453% ( 3) 00:14:57.810 3.160 - 3.173: 99.5555% ( 2) 00:14:57.810 3.413 - 3.440: 99.5657% ( 2) 00:14:57.810 3.440 - 3.467: 99.5759% ( 2) 00:14:57.810 3.653 - 3.680: 99.5810% ( 1) 00:14:57.810 3.787 - 3.813: 99.5861% ( 1) 00:14:57.810 3.920 - 3.947: 99.5913% ( 1) 00:14:57.810 3.973 - 4.000: 99.5964% ( 1) 00:14:57.810 4.187 - 4.213: 99.6015% ( 1) 00:14:57.810 4.400 - 4.427: 99.6066% ( 1) 00:14:57.810 4.453 - 4.480: 99.6168% ( 2) 00:14:57.810 4.533 - 4.560: 99.6219% ( 1) 00:14:57.810 4.640 - 4.667: 99.6270% ( 1) 00:14:57.810 4.667 - 4.693: 99.6423% ( 3) 00:14:57.810 4.720 - 4.747: 99.6577% ( 3) 00:14:57.810 4.747 - 4.773: 99.6628% ( 1) 00:14:57.810 4.773 - 4.800: 99.6679% ( 1) 00:14:57.810 4.880 - 4.907: 99.6730% ( 1) 00:14:57.810 4.907 - 4.933: 99.6832% ( 2) 00:14:57.810 4.933 - 4.960: 99.6883% ( 1) 00:14:57.810 4.960 - 4.987: 99.6934% ( 1) 00:14:57.810 4.987 - 5.013: 99.6985% ( 1) 00:14:57.810 5.013 - 5.040: 99.7037% ( 1) 00:14:57.810 5.067 - 5.093: 99.7088% ( 1) 00:14:57.810 5.093 - 5.120: 99.7190% ( 2) 00:14:57.810 5.120 - 5.147: 99.7241% ( 1) 00:14:57.810 5.173 - 5.200: 99.7292% ( 1) 00:14:57.810 5.280 - 5.307: 99.7343% ( 1) 00:14:57.810 5.307 - 5.333: 99.7394% ( 1) 00:14:57.810 5.333 - 5.360: 99.7445% ( 1) 00:14:57.810 5.360 - 5.387: 99.7496% ( 1) 00:14:57.810 5.440 - 5.467: 99.7548% ( 1) 00:14:57.810 5.467 - 5.493: 99.7599% ( 1) 00:14:57.810 5.493 - 5.520: 99.7701% ( 2) 00:14:57.810 5.520 - 5.547: 99.7803% ( 2) 00:14:57.810 5.680 - 5.707: 99.7956% ( 3) 00:14:57.810 5.840 - 5.867: 99.8007% ( 1) 00:14:57.810 5.867 - 5.893: 99.8058% ( 1) 00:14:57.810 5.893 - 5.920: 99.8110% ( 1) 00:14:57.810 5.920 - 5.947: 99.8212% ( 2) 00:14:57.810 6.000 - 6.027: 99.8365% ( 3) 00:14:57.810 6.027 - 6.053: 99.8467% ( 2) 00:14:57.810 6.133 - 6.160: 99.8569% ( 2) 00:14:57.810 6.160 - 6.187: 99.8620% ( 1) 00:14:57.810 6.267 - 6.293: 99.8672% ( 1) 00:14:57.810 6.347 - 6.373: 99.8723% ( 1) 00:14:57.810 6.533 - 6.560: 99.8774% ( 1) 00:14:57.810 6.667 - 6.693: 99.8825% ( 1) 00:14:57.810 6.720 - 6.747: 99.8927% ( 2) 00:14:57.810 6.773 - 6.800: 99.8978% ( 1) 00:14:57.810 6.880 - 6.933: 99.9029% ( 1) 00:14:57.810 6.933 - 6.987: 99.9080% ( 1) 00:14:57.810 
9.440 - 9.493: 99.9131% ( 1) 00:14:57.810 12.587 - 12.640: 99.9183% ( 1) 00:14:57.810 [2024-12-09 11:50:05.605939] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.810 12.800 - 12.853: 99.9234% ( 1) 00:14:57.810 3017.387 - 3031.040: 99.9285% ( 1) 00:14:57.810 3986.773 - 4014.080: 100.0000% ( 14) 00:14:57.810 00:14:57.810 Complete histogram 00:14:57.810 ================== 00:14:57.810 Range in us Cumulative Count 00:14:57.810 1.647 - 1.653: 0.3372% ( 66) 00:14:57.810 1.653 - 1.660: 2.0999% ( 345) 00:14:57.810 1.660 - 1.667: 2.2737% ( 34) 00:14:57.810 1.667 - 1.673: 2.6620% ( 76) 00:14:57.810 1.673 - 1.680: 3.0247% ( 71) 00:14:57.810 1.680 - 1.687: 3.1116% ( 17) 00:14:57.810 1.687 - 1.693: 3.2036% ( 18) 00:14:57.810 1.693 - 1.700: 3.3160% ( 22) 00:14:57.810 1.700 - 1.707: 23.5234% ( 3955) 00:14:57.810 1.707 - 1.720: 48.9373% ( 4974) 00:14:57.810 1.720 - 1.733: 75.8022% ( 5258) 00:14:57.810 1.733 - 1.747: 83.3895% ( 1485) 00:14:57.810 1.747 - 1.760: 84.5494% ( 227) 00:14:57.810 1.760 - 1.773: 87.8296% ( 642) 00:14:57.810 1.773 - 1.787: 92.4893% ( 912) 00:14:57.810 1.787 - 1.800: 96.7249% ( 829) 00:14:57.810 1.800 - 1.813: 98.7584% ( 398) 00:14:57.810 1.813 - 1.827: 99.3307% ( 112) 00:14:57.810 1.827 - 1.840: 99.4278% ( 19) 00:14:57.810 1.840 - 1.853: 99.4482% ( 4) 00:14:57.810 1.867 - 1.880: 99.4584% ( 2) 00:14:57.810 3.413 - 3.440: 99.4635% ( 1) 00:14:57.810 3.467 - 3.493: 99.4686% ( 1) 00:14:57.810 3.547 - 3.573: 99.4737% ( 1) 00:14:57.810 3.920 - 3.947: 99.4788% ( 1) 00:14:57.810 3.973 - 4.000: 99.4840% ( 1) 00:14:57.810 4.027 - 4.053: 99.4891% ( 1) 00:14:57.810 4.080 - 4.107: 99.5044% ( 3) 00:14:57.810 4.133 - 4.160: 99.5095% ( 1) 00:14:57.810 4.187 - 4.213: 99.5146% ( 1) 00:14:57.810 4.453 - 4.480: 99.5197% ( 1) 00:14:57.810 4.613 - 4.640: 99.5248% ( 1) 00:14:57.810 4.667 - 4.693: 99.5299% ( 1) 00:14:57.810 4.693 - 4.720: 99.5402% ( 2) 00:14:57.810 4.880 - 4.907: 99.5555% ( 3) 00:14:57.810 4.987 - 5.013: 99.5606% ( 1) 00:14:57.810 5.040 - 5.067: 99.5657% ( 1) 00:14:57.810 10.293 - 10.347: 99.5708% ( 1) 00:14:57.810 3986.773 - 4014.080: 99.9949% ( 83) 00:14:57.810 4969.813 - 4997.120: 100.0000% ( 1) 00:14:57.810 00:14:57.810 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:57.810 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:57.810 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:57.811 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:57.811 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.072 [ 00:14:58.072 { 00:14:58.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.072 "subtype": "Discovery", 00:14:58.072 "listen_addresses": [], 00:14:58.072 "allow_any_host": true, 00:14:58.072 "hosts": [] 00:14:58.072 }, 00:14:58.072 { 00:14:58.072 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.072 "subtype": "NVMe", 00:14:58.072 "listen_addresses": [ 00:14:58.072 { 00:14:58.072 "trtype": "VFIOUSER", 00:14:58.072 "adrfam": "IPv4", 00:14:58.072 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.072 "trsvcid": "0" 
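Note: the JSON array printed next is the response to the nvmf_get_subsystems RPC issued above. An equivalent standalone call is sketched below; the -s socket path is SPDK's default (/var/tmp/spdk.sock) and is an assumption here, since the harness does not pass one explicitly.

    # Query the running target for its subsystems (sketch; default RPC socket assumed):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock nvmf_get_subsystems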
00:14:58.072 } 00:14:58.072 ], 00:14:58.072 "allow_any_host": true, 00:14:58.072 "hosts": [], 00:14:58.072 "serial_number": "SPDK1", 00:14:58.072 "model_number": "SPDK bdev Controller", 00:14:58.072 "max_namespaces": 32, 00:14:58.072 "min_cntlid": 1, 00:14:58.072 "max_cntlid": 65519, 00:14:58.072 "namespaces": [ 00:14:58.072 { 00:14:58.072 "nsid": 1, 00:14:58.072 "bdev_name": "Malloc1", 00:14:58.072 "name": "Malloc1", 00:14:58.072 "nguid": "CE975AB2D417450AB0016EC9C89A940E", 00:14:58.072 "uuid": "ce975ab2-d417-450a-b001-6ec9c89a940e" 00:14:58.072 } 00:14:58.072 ] 00:14:58.072 }, 00:14:58.072 { 00:14:58.072 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.072 "subtype": "NVMe", 00:14:58.072 "listen_addresses": [ 00:14:58.072 { 00:14:58.072 "trtype": "VFIOUSER", 00:14:58.072 "adrfam": "IPv4", 00:14:58.072 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.072 "trsvcid": "0" 00:14:58.072 } 00:14:58.072 ], 00:14:58.072 "allow_any_host": true, 00:14:58.072 "hosts": [], 00:14:58.072 "serial_number": "SPDK2", 00:14:58.072 "model_number": "SPDK bdev Controller", 00:14:58.072 "max_namespaces": 32, 00:14:58.072 "min_cntlid": 1, 00:14:58.072 "max_cntlid": 65519, 00:14:58.072 "namespaces": [ 00:14:58.072 { 00:14:58.072 "nsid": 1, 00:14:58.072 "bdev_name": "Malloc2", 00:14:58.072 "name": "Malloc2", 00:14:58.072 "nguid": "14E5C4596BC647D787C7BE0E40CBA048", 00:14:58.072 "uuid": "14e5c459-6bc6-47d7-87c7-be0e40cba048" 00:14:58.072 } 00:14:58.072 ] 00:14:58.072 } 00:14:58.072 ] 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=453 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:58.072 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:58.333 [2024-12-09 11:50:05.989000] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.333 Malloc3 00:14:58.333 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:58.333 [2024-12-09 11:50:06.169280] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.333 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:58.333 Asynchronous Event Request test 00:14:58.333 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.333 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:58.333 Registering asynchronous event callbacks... 00:14:58.333 Starting namespace attribute notice tests for all controllers... 00:14:58.333 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:58.333 aer_cb - Changed Namespace 00:14:58.333 Cleaning up... 00:14:58.594 [ 00:14:58.594 { 00:14:58.594 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.594 "subtype": "Discovery", 00:14:58.594 "listen_addresses": [], 00:14:58.594 "allow_any_host": true, 00:14:58.594 "hosts": [] 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:58.594 "subtype": "NVMe", 00:14:58.594 "listen_addresses": [ 00:14:58.594 { 00:14:58.594 "trtype": "VFIOUSER", 00:14:58.594 "adrfam": "IPv4", 00:14:58.594 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:58.594 "trsvcid": "0" 00:14:58.594 } 00:14:58.594 ], 00:14:58.594 "allow_any_host": true, 00:14:58.594 "hosts": [], 00:14:58.594 "serial_number": "SPDK1", 00:14:58.594 "model_number": "SPDK bdev Controller", 00:14:58.594 "max_namespaces": 32, 00:14:58.594 "min_cntlid": 1, 00:14:58.594 "max_cntlid": 65519, 00:14:58.594 "namespaces": [ 00:14:58.594 { 00:14:58.594 "nsid": 1, 00:14:58.594 "bdev_name": "Malloc1", 00:14:58.594 "name": "Malloc1", 00:14:58.594 "nguid": "CE975AB2D417450AB0016EC9C89A940E", 00:14:58.594 "uuid": "ce975ab2-d417-450a-b001-6ec9c89a940e" 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "nsid": 2, 00:14:58.594 "bdev_name": "Malloc3", 00:14:58.594 "name": "Malloc3", 00:14:58.594 "nguid": "04B0FFB1974D437882049C55E7492614", 00:14:58.594 "uuid": "04b0ffb1-974d-4378-8204-9c55e7492614" 00:14:58.594 } 00:14:58.594 ] 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:58.594 "subtype": "NVMe", 00:14:58.594 "listen_addresses": [ 00:14:58.594 { 00:14:58.594 "trtype": "VFIOUSER", 00:14:58.594 "adrfam": "IPv4", 00:14:58.594 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:58.594 "trsvcid": "0" 00:14:58.594 } 00:14:58.594 ], 00:14:58.594 "allow_any_host": true, 00:14:58.594 "hosts": [], 00:14:58.594 "serial_number": "SPDK2", 00:14:58.594 "model_number": "SPDK bdev 
Controller", 00:14:58.594 "max_namespaces": 32, 00:14:58.594 "min_cntlid": 1, 00:14:58.594 "max_cntlid": 65519, 00:14:58.594 "namespaces": [ 00:14:58.594 { 00:14:58.594 "nsid": 1, 00:14:58.594 "bdev_name": "Malloc2", 00:14:58.594 "name": "Malloc2", 00:14:58.594 "nguid": "14E5C4596BC647D787C7BE0E40CBA048", 00:14:58.594 "uuid": "14e5c459-6bc6-47d7-87c7-be0e40cba048" 00:14:58.594 } 00:14:58.594 ] 00:14:58.594 } 00:14:58.594 ] 00:14:58.594 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 453 00:14:58.594 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.594 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:58.594 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:58.594 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:58.594 [2024-12-09 11:50:06.401227] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:14:58.594 [2024-12-09 11:50:06.401273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500 ] 00:14:58.594 [2024-12-09 11:50:06.439853] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:58.594 [2024-12-09 11:50:06.445057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.594 [2024-12-09 11:50:06.445076] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0a55c5a000 00:14:58.594 [2024-12-09 11:50:06.446055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.447064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.448067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.449072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.450077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.451082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.452090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.594 [2024-12-09 11:50:06.453096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.594 
[2024-12-09 11:50:06.454102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.594 [2024-12-09 11:50:06.454110] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0a55c4f000 00:14:58.594 [2024-12-09 11:50:06.455021] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.594 [2024-12-09 11:50:06.464392] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:58.594 [2024-12-09 11:50:06.464413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:58.594 [2024-12-09 11:50:06.469490] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:58.594 [2024-12-09 11:50:06.469527] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:58.594 [2024-12-09 11:50:06.469587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:58.594 [2024-12-09 11:50:06.469598] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:58.594 [2024-12-09 11:50:06.469602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:58.594 [2024-12-09 11:50:06.470500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:58.594 [2024-12-09 11:50:06.470508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:58.595 [2024-12-09 11:50:06.470514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:58.595 [2024-12-09 11:50:06.471503] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:58.595 [2024-12-09 11:50:06.471509] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:58.595 [2024-12-09 11:50:06.471515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.472510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:58.595 [2024-12-09 11:50:06.472516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.473516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:58.595 [2024-12-09 11:50:06.473523] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:58.595 
[2024-12-09 11:50:06.473526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.473531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.473640] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:58.595 [2024-12-09 11:50:06.473646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.473649] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:58.595 [2024-12-09 11:50:06.474527] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:58.595 [2024-12-09 11:50:06.475529] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:58.595 [2024-12-09 11:50:06.476534] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:58.595 [2024-12-09 11:50:06.477539] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:58.595 [2024-12-09 11:50:06.477569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:58.595 [2024-12-09 11:50:06.478553] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:58.595 [2024-12-09 11:50:06.478559] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:58.595 [2024-12-09 11:50:06.478563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:58.595 [2024-12-09 11:50:06.478578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:58.595 [2024-12-09 11:50:06.478588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:58.595 [2024-12-09 11:50:06.478599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.595 [2024-12-09 11:50:06.478602] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.595 [2024-12-09 11:50:06.478605] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.595 [2024-12-09 11:50:06.478615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.857 [2024-12-09 11:50:06.486643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:58.857 [2024-12-09 
11:50:06.486654] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:58.857 [2024-12-09 11:50:06.486658] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:58.857 [2024-12-09 11:50:06.486662] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:58.857 [2024-12-09 11:50:06.486665] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:58.857 [2024-12-09 11:50:06.486668] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:58.857 [2024-12-09 11:50:06.486672] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:58.857 [2024-12-09 11:50:06.486675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:58.857 [2024-12-09 11:50:06.486681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:58.857 [2024-12-09 11:50:06.486690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:58.857 [2024-12-09 11:50:06.494641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:58.857 [2024-12-09 11:50:06.494651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.858 [2024-12-09 11:50:06.494657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.858 [2024-12-09 11:50:06.494663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.858 [2024-12-09 11:50:06.494669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.858 [2024-12-09 11:50:06.494673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.494679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.494686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.502641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.502646] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:58.858 [2024-12-09 11:50:06.502650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 
00:14:58.858 [2024-12-09 11:50:06.502655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.502659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.502666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.510641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.510688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.510693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.510699] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:58.858 [2024-12-09 11:50:06.510702] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:58.858 [2024-12-09 11:50:06.510704] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.510709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.518641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.518649] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:58.858 [2024-12-09 11:50:06.518658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.518665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.518670] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.858 [2024-12-09 11:50:06.518673] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.858 [2024-12-09 11:50:06.518676] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.518680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.526642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.526653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.526658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait 
for identify namespace id descriptors (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.526664] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.858 [2024-12-09 11:50:06.526667] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.858 [2024-12-09 11:50:06.526669] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.526674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.534642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.534648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534676] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:58.858 [2024-12-09 11:50:06.534679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:58.858 [2024-12-09 11:50:06.534683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:58.858 [2024-12-09 11:50:06.534697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.542640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.542657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.550640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.550650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.555725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:58.858 [2024-12-09 11:50:06.555736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.566641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.566656] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:58.858 [2024-12-09 11:50:06.566659] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:58.858 [2024-12-09 11:50:06.566662] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:58.858 [2024-12-09 11:50:06.566664] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:58.858 [2024-12-09 11:50:06.566667] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:58.858 [2024-12-09 11:50:06.566671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:58.858 [2024-12-09 11:50:06.566677] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:58.858 [2024-12-09 11:50:06.566680] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:58.858 [2024-12-09 11:50:06.566682] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.566687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.566692] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:58.858 [2024-12-09 11:50:06.566695] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.858 [2024-12-09 11:50:06.566697] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.566701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.566707] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:58.858 [2024-12-09 11:50:06.566710] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:58.858 [2024-12-09 11:50:06.566712] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.858 [2024-12-09 11:50:06.566717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:58.858 [2024-12-09 11:50:06.574642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.574653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:58.858 [2024-12-09 11:50:06.574661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:58.858 
[2024-12-09 11:50:06.574666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:58.858 ===================================================== 00:14:58.858 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:58.858 ===================================================== 00:14:58.858 Controller Capabilities/Features 00:14:58.858 ================================ 00:14:58.858 Vendor ID: 4e58 00:14:58.858 Subsystem Vendor ID: 4e58 00:14:58.858 Serial Number: SPDK2 00:14:58.858 Model Number: SPDK bdev Controller 00:14:58.858 Firmware Version: 25.01 00:14:58.858 Recommended Arb Burst: 6 00:14:58.858 IEEE OUI Identifier: 8d 6b 50 00:14:58.858 Multi-path I/O 00:14:58.858 May have multiple subsystem ports: Yes 00:14:58.858 May have multiple controllers: Yes 00:14:58.858 Associated with SR-IOV VF: No 00:14:58.858 Max Data Transfer Size: 131072 00:14:58.858 Max Number of Namespaces: 32 00:14:58.858 Max Number of I/O Queues: 127 00:14:58.858 NVMe Specification Version (VS): 1.3 00:14:58.859 NVMe Specification Version (Identify): 1.3 00:14:58.859 Maximum Queue Entries: 256 00:14:58.859 Contiguous Queues Required: Yes 00:14:58.859 Arbitration Mechanisms Supported 00:14:58.859 Weighted Round Robin: Not Supported 00:14:58.859 Vendor Specific: Not Supported 00:14:58.859 Reset Timeout: 15000 ms 00:14:58.859 Doorbell Stride: 4 bytes 00:14:58.859 NVM Subsystem Reset: Not Supported 00:14:58.859 Command Sets Supported 00:14:58.859 NVM Command Set: Supported 00:14:58.859 Boot Partition: Not Supported 00:14:58.859 Memory Page Size Minimum: 4096 bytes 00:14:58.859 Memory Page Size Maximum: 4096 bytes 00:14:58.859 Persistent Memory Region: Not Supported 00:14:58.859 Optional Asynchronous Events Supported 00:14:58.859 Namespace Attribute Notices: Supported 00:14:58.859 Firmware Activation Notices: Not Supported 00:14:58.859 ANA Change Notices: Not Supported 00:14:58.859 PLE Aggregate Log Change Notices: Not Supported 00:14:58.859 LBA Status Info Alert Notices: Not Supported 00:14:58.859 EGE Aggregate Log Change Notices: Not Supported 00:14:58.859 Normal NVM Subsystem Shutdown event: Not Supported 00:14:58.859 Zone Descriptor Change Notices: Not Supported 00:14:58.859 Discovery Log Change Notices: Not Supported 00:14:58.859 Controller Attributes 00:14:58.859 128-bit Host Identifier: Supported 00:14:58.859 Non-Operational Permissive Mode: Not Supported 00:14:58.859 NVM Sets: Not Supported 00:14:58.859 Read Recovery Levels: Not Supported 00:14:58.859 Endurance Groups: Not Supported 00:14:58.859 Predictable Latency Mode: Not Supported 00:14:58.859 Traffic Based Keep ALive: Not Supported 00:14:58.859 Namespace Granularity: Not Supported 00:14:58.859 SQ Associations: Not Supported 00:14:58.859 UUID List: Not Supported 00:14:58.859 Multi-Domain Subsystem: Not Supported 00:14:58.859 Fixed Capacity Management: Not Supported 00:14:58.859 Variable Capacity Management: Not Supported 00:14:58.859 Delete Endurance Group: Not Supported 00:14:58.859 Delete NVM Set: Not Supported 00:14:58.859 Extended LBA Formats Supported: Not Supported 00:14:58.859 Flexible Data Placement Supported: Not Supported 00:14:58.859 00:14:58.859 Controller Memory Buffer Support 00:14:58.859 ================================ 00:14:58.859 Supported: No 00:14:58.859 00:14:58.859 Persistent Memory Region Support 00:14:58.859 ================================ 00:14:58.859 Supported: No 00:14:58.859 00:14:58.859 Admin Command Set Attributes 
00:14:58.859 ============================ 00:14:58.859 Security Send/Receive: Not Supported 00:14:58.859 Format NVM: Not Supported 00:14:58.859 Firmware Activate/Download: Not Supported 00:14:58.859 Namespace Management: Not Supported 00:14:58.859 Device Self-Test: Not Supported 00:14:58.859 Directives: Not Supported 00:14:58.859 NVMe-MI: Not Supported 00:14:58.859 Virtualization Management: Not Supported 00:14:58.859 Doorbell Buffer Config: Not Supported 00:14:58.859 Get LBA Status Capability: Not Supported 00:14:58.859 Command & Feature Lockdown Capability: Not Supported 00:14:58.859 Abort Command Limit: 4 00:14:58.859 Async Event Request Limit: 4 00:14:58.859 Number of Firmware Slots: N/A 00:14:58.859 Firmware Slot 1 Read-Only: N/A 00:14:58.859 Firmware Activation Without Reset: N/A 00:14:58.859 Multiple Update Detection Support: N/A 00:14:58.859 Firmware Update Granularity: No Information Provided 00:14:58.859 Per-Namespace SMART Log: No 00:14:58.859 Asymmetric Namespace Access Log Page: Not Supported 00:14:58.859 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:58.859 Command Effects Log Page: Supported 00:14:58.859 Get Log Page Extended Data: Supported 00:14:58.859 Telemetry Log Pages: Not Supported 00:14:58.859 Persistent Event Log Pages: Not Supported 00:14:58.859 Supported Log Pages Log Page: May Support 00:14:58.859 Commands Supported & Effects Log Page: Not Supported 00:14:58.859 Feature Identifiers & Effects Log Page:May Support 00:14:58.859 NVMe-MI Commands & Effects Log Page: May Support 00:14:58.859 Data Area 4 for Telemetry Log: Not Supported 00:14:58.859 Error Log Page Entries Supported: 128 00:14:58.859 Keep Alive: Supported 00:14:58.859 Keep Alive Granularity: 10000 ms 00:14:58.859 00:14:58.859 NVM Command Set Attributes 00:14:58.859 ========================== 00:14:58.859 Submission Queue Entry Size 00:14:58.859 Max: 64 00:14:58.859 Min: 64 00:14:58.859 Completion Queue Entry Size 00:14:58.859 Max: 16 00:14:58.859 Min: 16 00:14:58.859 Number of Namespaces: 32 00:14:58.859 Compare Command: Supported 00:14:58.859 Write Uncorrectable Command: Not Supported 00:14:58.859 Dataset Management Command: Supported 00:14:58.859 Write Zeroes Command: Supported 00:14:58.859 Set Features Save Field: Not Supported 00:14:58.859 Reservations: Not Supported 00:14:58.859 Timestamp: Not Supported 00:14:58.859 Copy: Supported 00:14:58.859 Volatile Write Cache: Present 00:14:58.859 Atomic Write Unit (Normal): 1 00:14:58.859 Atomic Write Unit (PFail): 1 00:14:58.859 Atomic Compare & Write Unit: 1 00:14:58.859 Fused Compare & Write: Supported 00:14:58.859 Scatter-Gather List 00:14:58.859 SGL Command Set: Supported (Dword aligned) 00:14:58.859 SGL Keyed: Not Supported 00:14:58.859 SGL Bit Bucket Descriptor: Not Supported 00:14:58.859 SGL Metadata Pointer: Not Supported 00:14:58.859 Oversized SGL: Not Supported 00:14:58.859 SGL Metadata Address: Not Supported 00:14:58.859 SGL Offset: Not Supported 00:14:58.859 Transport SGL Data Block: Not Supported 00:14:58.859 Replay Protected Memory Block: Not Supported 00:14:58.859 00:14:58.859 Firmware Slot Information 00:14:58.859 ========================= 00:14:58.859 Active slot: 1 00:14:58.859 Slot 1 Firmware Revision: 25.01 00:14:58.859 00:14:58.859 00:14:58.859 Commands Supported and Effects 00:14:58.859 ============================== 00:14:58.859 Admin Commands 00:14:58.859 -------------- 00:14:58.859 Get Log Page (02h): Supported 00:14:58.859 Identify (06h): Supported 00:14:58.859 Abort (08h): Supported 00:14:58.859 Set Features (09h): Supported 
00:14:58.859 Get Features (0Ah): Supported 00:14:58.859 Asynchronous Event Request (0Ch): Supported 00:14:58.859 Keep Alive (18h): Supported 00:14:58.859 I/O Commands 00:14:58.859 ------------ 00:14:58.859 Flush (00h): Supported LBA-Change 00:14:58.859 Write (01h): Supported LBA-Change 00:14:58.859 Read (02h): Supported 00:14:58.859 Compare (05h): Supported 00:14:58.859 Write Zeroes (08h): Supported LBA-Change 00:14:58.859 Dataset Management (09h): Supported LBA-Change 00:14:58.859 Copy (19h): Supported LBA-Change 00:14:58.859 00:14:58.859 Error Log 00:14:58.859 ========= 00:14:58.859 00:14:58.859 Arbitration 00:14:58.859 =========== 00:14:58.859 Arbitration Burst: 1 00:14:58.859 00:14:58.859 Power Management 00:14:58.859 ================ 00:14:58.859 Number of Power States: 1 00:14:58.859 Current Power State: Power State #0 00:14:58.859 Power State #0: 00:14:58.859 Max Power: 0.00 W 00:14:58.859 Non-Operational State: Operational 00:14:58.859 Entry Latency: Not Reported 00:14:58.859 Exit Latency: Not Reported 00:14:58.859 Relative Read Throughput: 0 00:14:58.859 Relative Read Latency: 0 00:14:58.859 Relative Write Throughput: 0 00:14:58.859 Relative Write Latency: 0 00:14:58.859 Idle Power: Not Reported 00:14:58.859 Active Power: Not Reported 00:14:58.859 Non-Operational Permissive Mode: Not Supported 00:14:58.859 00:14:58.859 Health Information 00:14:58.859 ================== 00:14:58.859 Critical Warnings: 00:14:58.859 Available Spare Space: OK 00:14:58.859 Temperature: OK 00:14:58.859 Device Reliability: OK 00:14:58.859 Read Only: No 00:14:58.859 Volatile Memory Backup: OK 00:14:58.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:58.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:58.859 Available Spare: 0% 00:14:58.859 Available Spare Threshold: 0% 00:14:58.860 Life Percentage Used: 0% 00:14:58.860 Data Units Read: 0 00:14:58.860 Data Units Written: 0 00:14:58.860 Host Read Commands: 0 00:14:58.860 Host Write Commands: 0 00:14:58.860 Controller Busy Time: 0 minutes 00:14:58.860 Power Cycles: 0 00:14:58.860 Power On Hours: 0 hours 00:14:58.860 Unsafe Shutdowns: 0 00:14:58.860 Unrecoverable Media Errors: 0 00:14:58.860 Lifetime Error Log Entries: 0 00:14:58.860 Warning Temperature Time: 0 minutes 00:14:58.860 Critical Temperature Time: 0 minutes 00:14:58.860 00:14:58.860 Number of Queues 00:14:58.860 ================ 00:14:58.860 Number of I/O Submission Queues: 127 00:14:58.860 Number of I/O Completion Queues: 127 00:14:58.860 00:14:58.860 Active Namespaces 00:14:58.860 ================= 00:14:58.860 Namespace ID:1 00:14:58.860 Error Recovery Timeout: Unlimited 00:14:58.860 Command Set Identifier: NVM (00h) 00:14:58.860 Deallocate: Supported 00:14:58.860 Deallocated/Unwritten Error: Not Supported 00:14:58.860 Deallocated Read Value: Unknown 00:14:58.860 Deallocate in Write Zeroes: Not Supported 00:14:58.860 Deallocated Guard Field: 0xFFFF 00:14:58.860 Flush: Supported 00:14:58.860 Reservation: Supported 00:14:58.860 Namespace Sharing Capabilities: Multiple Controllers 00:14:58.860 Size (in LBAs): 131072 (0GiB) 00:14:58.860 Capacity (in LBAs): 131072 (0GiB) 00:14:58.860 Utilization (in LBAs): 131072 (0GiB) 00:14:58.860 NGUID: 14E5C4596BC647D787C7BE0E40CBA048 00:14:58.860 UUID: 14e5c459-6bc6-47d7-87c7-be0e40cba048 00:14:58.860 Thin Provisioning: Not Supported 00:14:58.860 Per-NS Atomic Units: Yes 00:14:58.860 Atomic Boundary Size (Normal): 0 00:14:58.860 Atomic Boundary Size (PFail): 0 00:14:58.860 Atomic Boundary Offset: 0 00:14:58.860 Maximum Single Source Range Length: 65535 00:14:58.860 Maximum Copy Length: 65535 00:14:58.860 Maximum Source Range Count: 1 00:14:58.860 NGUID/EUI64 Never Reused: No 00:14:58.860 Namespace Write Protected: No 00:14:58.860 Number of LBA Formats: 1 00:14:58.860 Current LBA Format: LBA Format #00 00:14:58.860 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:58.860 00:14:58.860 [2024-12-09 11:50:06.574741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:58.859 [2024-12-09 11:50:06.582643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:58.859 [2024-12-09 11:50:06.582670] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:58.859 [2024-12-09 11:50:06.582677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.859 [2024-12-09 11:50:06.582682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.859 [2024-12-09 11:50:06.582686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.859 [2024-12-09 11:50:06.582690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.859 [2024-12-09 11:50:06.582719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:58.859 [2024-12-09 11:50:06.582727] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:58.859 [2024-12-09 11:50:06.583726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:58.859 [2024-12-09 11:50:06.583763] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:58.860 [2024-12-09 11:50:06.583768] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:58.860 [2024-12-09 11:50:06.584734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:58.860 [2024-12-09 11:50:06.584743] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:58.860 [2024-12-09 11:50:06.584785] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:58.860 [2024-12-09 11:50:06.585755] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.860 11:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:59.121 [2024-12-09 11:50:06.773707] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:04.409 Initializing NVMe Controllers 00:15:04.409
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:04.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:04.409 Initialization complete. Launching workers. 00:15:04.409 ======================================================== 00:15:04.409 Latency(us) 00:15:04.409 Device Information : IOPS MiB/s Average min max 00:15:04.409 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40002.16 156.26 3199.71 867.06 6775.07 00:15:04.409 ======================================================== 00:15:04.409 Total : 40002.16 156.26 3199.71 867.06 6775.07 00:15:04.409 00:15:04.409 [2024-12-09 11:50:11.878838] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:04.409 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:04.409 [2024-12-09 11:50:12.073420] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:09.698 Initializing NVMe Controllers 00:15:09.698 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:09.698 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:09.698 Initialization complete. Launching workers. 00:15:09.698 ======================================================== 00:15:09.698 Latency(us) 00:15:09.698 Device Information : IOPS MiB/s Average min max 00:15:09.698 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39966.77 156.12 3202.54 869.41 6876.16 00:15:09.698 ======================================================== 00:15:09.698 Total : 39966.77 156.12 3202.54 869.41 6876.16 00:15:09.698 00:15:09.698 [2024-12-09 11:50:17.090845] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.698 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:09.698 [2024-12-09 11:50:17.297057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.985 [2024-12-09 11:50:22.434722] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.985 Initializing NVMe Controllers 00:15:14.985 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.985 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:14.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:14.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:14.985 Initialization complete. Launching workers. 
00:15:14.985 Starting thread on core 2 00:15:14.985 Starting thread on core 3 00:15:14.985 Starting thread on core 1 00:15:14.985 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:14.985 [2024-12-09 11:50:22.684021] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.285 [2024-12-09 11:50:25.738950] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.285 Initializing NVMe Controllers 00:15:18.285 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.285 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.285 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:18.285 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:18.285 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:18.285 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:18.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:18.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:18.285 Initialization complete. Launching workers. 00:15:18.285 Starting thread on core 1 with urgent priority queue 00:15:18.285 Starting thread on core 2 with urgent priority queue 00:15:18.285 Starting thread on core 3 with urgent priority queue 00:15:18.285 Starting thread on core 0 with urgent priority queue 00:15:18.285 SPDK bdev Controller (SPDK2 ) core 0: 11923.33 IO/s 8.39 secs/100000 ios 00:15:18.285 SPDK bdev Controller (SPDK2 ) core 1: 13512.67 IO/s 7.40 secs/100000 ios 00:15:18.285 SPDK bdev Controller (SPDK2 ) core 2: 17074.33 IO/s 5.86 secs/100000 ios 00:15:18.285 SPDK bdev Controller (SPDK2 ) core 3: 13611.67 IO/s 7.35 secs/100000 ios 00:15:18.285 ======================================================== 00:15:18.285 00:15:18.285 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:18.285 [2024-12-09 11:50:25.979016] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.285 Initializing NVMe Controllers 00:15:18.285 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.285 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.285 Namespace ID: 1 size: 0GB 00:15:18.285 Initialization complete. 00:15:18.285 INFO: using host memory buffer for IO 00:15:18.285 Hello world! 
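Note: the perf, reconnect, arbitration, and hello_world runs above all reach the target through the same vfio-user transport ID string. A minimal sketch of that invocation pattern, assuming the workspace layout shown in this log (the endpoint path, subsystem NQN, and all flags are copied from the traced commands above; a different checkout would need its own paths):
# shared transport ID for the vfio-user endpoint exercised above
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 5 s of 4 KiB reads at queue depth 128, pinned to core 1 (mask 0x2)
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# arbitration and hello_world example apps against the same endpoint
$SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"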
00:15:18.285 [2024-12-09 11:50:25.989073] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.285 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:18.545 [2024-12-09 11:50:26.224344] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.488 Initializing NVMe Controllers 00:15:19.488 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.488 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.488 Initialization complete. Launching workers. 00:15:19.488 submit (in ns) avg, min, max = 5607.9, 2830.8, 3998216.7 00:15:19.488 complete (in ns) avg, min, max = 14650.8, 1622.5, 3997438.3 00:15:19.488 00:15:19.488 Submit histogram 00:15:19.488 ================ 00:15:19.488 Range in us Cumulative Count 00:15:19.488 2.827 - 2.840: 0.4452% ( 88) 00:15:19.488 2.840 - 2.853: 1.1484% ( 139) 00:15:19.488 2.853 - 2.867: 3.4908% ( 463) 00:15:19.488 2.867 - 2.880: 8.4134% ( 973) 00:15:19.488 2.880 - 2.893: 13.5232% ( 1010) 00:15:19.488 2.893 - 2.907: 18.3497% ( 954) 00:15:19.488 2.907 - 2.920: 24.3398% ( 1184) 00:15:19.488 2.920 - 2.933: 30.4057% ( 1199) 00:15:19.488 2.933 - 2.947: 36.7854% ( 1261) 00:15:19.488 2.947 - 2.960: 41.0604% ( 845) 00:15:19.488 2.960 - 2.973: 45.1482% ( 808) 00:15:19.488 2.973 - 2.987: 50.2024% ( 999) 00:15:19.488 2.987 - 3.000: 57.9834% ( 1538) 00:15:19.488 3.000 - 3.013: 67.6414% ( 1909) 00:15:19.488 3.013 - 3.027: 77.4866% ( 1946) 00:15:19.488 3.027 - 3.040: 84.6150% ( 1409) 00:15:19.488 3.040 - 3.053: 90.3471% ( 1133) 00:15:19.488 3.053 - 3.067: 94.3944% ( 800) 00:15:19.488 3.067 - 3.080: 97.0808% ( 531) 00:15:19.488 3.080 - 3.093: 98.4873% ( 278) 00:15:19.488 3.093 - 3.107: 99.1956% ( 140) 00:15:19.488 3.107 - 3.120: 99.5244% ( 65) 00:15:19.488 3.120 - 3.133: 99.5902% ( 13) 00:15:19.488 3.133 - 3.147: 99.6206% ( 6) 00:15:19.488 3.147 - 3.160: 99.6357% ( 3) 00:15:19.488 3.493 - 3.520: 99.6459% ( 2) 00:15:19.488 4.133 - 4.160: 99.6509% ( 1) 00:15:19.488 4.213 - 4.240: 99.6560% ( 1) 00:15:19.488 4.267 - 4.293: 99.6610% ( 1) 00:15:19.488 4.560 - 4.587: 99.6661% ( 1) 00:15:19.488 4.587 - 4.613: 99.6762% ( 2) 00:15:19.488 4.693 - 4.720: 99.6863% ( 2) 00:15:19.488 4.720 - 4.747: 99.6914% ( 1) 00:15:19.488 4.747 - 4.773: 99.6964% ( 1) 00:15:19.488 4.773 - 4.800: 99.7015% ( 1) 00:15:19.488 4.800 - 4.827: 99.7116% ( 2) 00:15:19.488 4.907 - 4.933: 99.7167% ( 1) 00:15:19.488 4.960 - 4.987: 99.7217% ( 1) 00:15:19.488 4.987 - 5.013: 99.7319% ( 2) 00:15:19.488 5.013 - 5.040: 99.7369% ( 1) 00:15:19.488 5.040 - 5.067: 99.7420% ( 1) 00:15:19.488 5.120 - 5.147: 99.7521% ( 2) 00:15:19.488 5.147 - 5.173: 99.7673% ( 3) 00:15:19.488 5.173 - 5.200: 99.7774% ( 2) 00:15:19.488 5.227 - 5.253: 99.7926% ( 3) 00:15:19.488 5.253 - 5.280: 99.8078% ( 3) 00:15:19.488 5.360 - 5.387: 99.8128% ( 1) 00:15:19.488 5.467 - 5.493: 99.8179% ( 1) 00:15:19.488 5.493 - 5.520: 99.8229% ( 1) 00:15:19.488 5.520 - 5.547: 99.8330% ( 2) 00:15:19.488 5.547 - 5.573: 99.8381% ( 1) 00:15:19.488 5.573 - 5.600: 99.8432% ( 1) 00:15:19.488 5.600 - 5.627: 99.8482% ( 1) 00:15:19.488 5.760 - 5.787: 99.8533% ( 1) 00:15:19.488 5.840 - 5.867: 99.8634% ( 2) 00:15:19.488 5.867 - 5.893: 99.8735% ( 2) 00:15:19.488 5.893 - 5.920: 99.8786% ( 1) 00:15:19.488 6.000 - 6.027: 
99.8836% ( 1) 00:15:19.488 6.133 - 6.160: 99.8887% ( 1) 00:15:19.488 6.373 - 6.400: 99.8938% ( 1) 00:15:19.488 6.533 - 6.560: 99.9039% ( 2) 00:15:19.488 6.587 - 6.613: 99.9089% ( 1) 00:15:19.488 7.040 - 7.093: 99.9140% ( 1) 00:15:19.488 7.253 - 7.307: 99.9191% ( 1) 00:15:19.488 7.467 - 7.520: 99.9241% ( 1) 00:15:19.488 7.733 - 7.787: 99.9292% ( 1) 00:15:19.488 8.213 - 8.267: 99.9342% ( 1) 00:15:19.488 3986.773 - 4014.080: 100.0000% ( 13) 00:15:19.488 00:15:19.488 Complete histogram 00:15:19.488 ================== 00:15:19.488 Range in us Cumulative Count 00:15:19.488 1.620 - 1.627: 0.0051% ( 1) 00:15:19.488 1.627 - 1.633: 0.0152% ( 2) 00:15:19.488 1.633 - 1.640: 0.6982% ( 135) 00:15:19.488 1.640 - 1.647: 1.1687% ( 93) 00:15:19.488 1.647 - 1.653: 1.2901% ( 24) 00:15:19.488 1.653 - 1.660: 1.4722% ( 36) 00:15:19.488 1.660 - 1.667: 1.5835% ( 22) 00:15:19.488 1.667 - 1.673: 1.6240% ( 8) 00:15:19.488 1.673 - 1.680: 1.6392% ( 3) 00:15:19.488 1.680 - 1.687: 11.9093% ( 2030) 00:15:19.488 [2024-12-09 11:50:27.319170] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.488 1.687 - 1.693: 38.9811% ( 5351) 00:15:19.488 1.693 - 1.700: 45.0875% ( 1207) 00:15:19.488 1.700 - 1.707: 60.9987% ( 3145) 00:15:19.488 1.707 - 1.720: 78.5187% ( 3463) 00:15:19.488 1.720 - 1.733: 84.0635% ( 1096) 00:15:19.488 1.733 - 1.747: 85.8140% ( 346) 00:15:19.488 1.747 - 1.760: 89.2998% ( 689) 00:15:19.488 1.760 - 1.773: 93.9846% ( 926) 00:15:19.488 1.773 - 1.787: 97.4299% ( 681) 00:15:19.488 1.787 - 1.800: 99.0236% ( 315) 00:15:19.488 1.800 - 1.813: 99.4182% ( 78) 00:15:19.488 1.813 - 1.827: 99.5143% ( 19) 00:15:19.488 1.827 - 1.840: 99.5295% ( 3) 00:15:19.488 1.853 - 1.867: 99.5396% ( 2) 00:15:19.488 3.680 - 3.707: 99.5447% ( 1) 00:15:19.488 3.707 - 3.733: 99.5497% ( 1) 00:15:19.488 3.733 - 3.760: 99.5548% ( 1) 00:15:19.488 3.760 - 3.787: 99.5599% ( 1) 00:15:19.488 3.867 - 3.893: 99.5649% ( 1) 00:15:19.488 3.920 - 3.947: 99.5750% ( 2) 00:15:19.488 3.947 - 3.973: 99.5851% ( 2) 00:15:19.488 4.000 - 4.027: 99.5902% ( 1) 00:15:19.488 4.107 - 4.133: 99.5953% ( 1) 00:15:19.488 4.133 - 4.160: 99.6054% ( 2) 00:15:19.488 4.267 - 4.293: 99.6104% ( 1) 00:15:19.488 4.293 - 4.320: 99.6206% ( 2) 00:15:19.488 4.427 - 4.453: 99.6307% ( 2) 00:15:19.488 4.560 - 4.587: 99.6357% ( 1) 00:15:19.488 4.587 - 4.613: 99.6459% ( 2) 00:15:19.488 4.613 - 4.640: 99.6509% ( 1) 00:15:19.488 9.173 - 9.227: 99.6560% ( 1) 00:15:19.488 9.600 - 9.653: 99.6610% ( 1) 00:15:19.488 10.080 - 10.133: 99.6712% ( 2) 00:15:19.488 129.707 - 130.560: 99.6762% ( 1) 00:15:19.488 3986.773 - 4014.080: 100.0000% ( 64) 00:15:19.488 00:15:19.488 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:19.488 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:19.488 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:19.488 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:19.488 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.750 [ 00:15:19.750 { 00:15:19.750 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.750 "subtype":
"Discovery", 00:15:19.750 "listen_addresses": [], 00:15:19.750 "allow_any_host": true, 00:15:19.750 "hosts": [] 00:15:19.750 }, 00:15:19.750 { 00:15:19.750 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.750 "subtype": "NVMe", 00:15:19.750 "listen_addresses": [ 00:15:19.750 { 00:15:19.750 "trtype": "VFIOUSER", 00:15:19.750 "adrfam": "IPv4", 00:15:19.750 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.750 "trsvcid": "0" 00:15:19.750 } 00:15:19.750 ], 00:15:19.750 "allow_any_host": true, 00:15:19.750 "hosts": [], 00:15:19.750 "serial_number": "SPDK1", 00:15:19.750 "model_number": "SPDK bdev Controller", 00:15:19.750 "max_namespaces": 32, 00:15:19.750 "min_cntlid": 1, 00:15:19.750 "max_cntlid": 65519, 00:15:19.750 "namespaces": [ 00:15:19.750 { 00:15:19.750 "nsid": 1, 00:15:19.750 "bdev_name": "Malloc1", 00:15:19.750 "name": "Malloc1", 00:15:19.750 "nguid": "CE975AB2D417450AB0016EC9C89A940E", 00:15:19.750 "uuid": "ce975ab2-d417-450a-b001-6ec9c89a940e" 00:15:19.750 }, 00:15:19.750 { 00:15:19.750 "nsid": 2, 00:15:19.750 "bdev_name": "Malloc3", 00:15:19.750 "name": "Malloc3", 00:15:19.750 "nguid": "04B0FFB1974D437882049C55E7492614", 00:15:19.750 "uuid": "04b0ffb1-974d-4378-8204-9c55e7492614" 00:15:19.750 } 00:15:19.750 ] 00:15:19.750 }, 00:15:19.750 { 00:15:19.750 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.750 "subtype": "NVMe", 00:15:19.750 "listen_addresses": [ 00:15:19.750 { 00:15:19.750 "trtype": "VFIOUSER", 00:15:19.750 "adrfam": "IPv4", 00:15:19.750 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.750 "trsvcid": "0" 00:15:19.750 } 00:15:19.750 ], 00:15:19.750 "allow_any_host": true, 00:15:19.750 "hosts": [], 00:15:19.750 "serial_number": "SPDK2", 00:15:19.750 "model_number": "SPDK bdev Controller", 00:15:19.750 "max_namespaces": 32, 00:15:19.750 "min_cntlid": 1, 00:15:19.750 "max_cntlid": 65519, 00:15:19.750 "namespaces": [ 00:15:19.750 { 00:15:19.750 "nsid": 1, 00:15:19.750 "bdev_name": "Malloc2", 00:15:19.750 "name": "Malloc2", 00:15:19.750 "nguid": "14E5C4596BC647D787C7BE0E40CBA048", 00:15:19.750 "uuid": "14e5c459-6bc6-47d7-87c7-be0e40cba048" 00:15:19.750 } 00:15:19.750 ] 00:15:19.750 } 00:15:19.750 ] 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=5127 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:19.750 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:20.015 [2024-12-09 11:50:27.681205] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.015 Malloc4 00:15:20.015 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:20.015 [2024-12-09 11:50:27.885624] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.293 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:20.293 Asynchronous Event Request test 00:15:20.293 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.293 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:20.293 Registering asynchronous event callbacks... 00:15:20.293 Starting namespace attribute notice tests for all controllers... 00:15:20.293 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:20.293 aer_cb - Changed Namespace 00:15:20.293 Cleaning up... 00:15:20.293 [ 00:15:20.293 { 00:15:20.293 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.293 "subtype": "Discovery", 00:15:20.293 "listen_addresses": [], 00:15:20.293 "allow_any_host": true, 00:15:20.293 "hosts": [] 00:15:20.293 }, 00:15:20.293 { 00:15:20.293 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.293 "subtype": "NVMe", 00:15:20.293 "listen_addresses": [ 00:15:20.293 { 00:15:20.293 "trtype": "VFIOUSER", 00:15:20.293 "adrfam": "IPv4", 00:15:20.293 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.293 "trsvcid": "0" 00:15:20.293 } 00:15:20.293 ], 00:15:20.293 "allow_any_host": true, 00:15:20.293 "hosts": [], 00:15:20.293 "serial_number": "SPDK1", 00:15:20.293 "model_number": "SPDK bdev Controller", 00:15:20.293 "max_namespaces": 32, 00:15:20.293 "min_cntlid": 1, 00:15:20.293 "max_cntlid": 65519, 00:15:20.293 "namespaces": [ 00:15:20.293 { 00:15:20.293 "nsid": 1, 00:15:20.293 "bdev_name": "Malloc1", 00:15:20.293 "name": "Malloc1", 00:15:20.293 "nguid": "CE975AB2D417450AB0016EC9C89A940E", 00:15:20.293 "uuid": "ce975ab2-d417-450a-b001-6ec9c89a940e" 00:15:20.293 }, 00:15:20.293 { 00:15:20.293 "nsid": 2, 00:15:20.293 "bdev_name": "Malloc3", 00:15:20.293 "name": "Malloc3", 00:15:20.293 "nguid": "04B0FFB1974D437882049C55E7492614", 00:15:20.293 "uuid": "04b0ffb1-974d-4378-8204-9c55e7492614" 00:15:20.293 } 00:15:20.293 ] 00:15:20.293 }, 00:15:20.293 { 00:15:20.293 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.293 "subtype": "NVMe", 00:15:20.293 "listen_addresses": [ 00:15:20.293 { 00:15:20.293 "trtype": "VFIOUSER", 00:15:20.293 "adrfam": "IPv4", 00:15:20.293 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.293 "trsvcid": "0" 00:15:20.293 } 00:15:20.293 ], 00:15:20.293 "allow_any_host": true, 00:15:20.293 "hosts": [], 00:15:20.293 "serial_number": "SPDK2", 00:15:20.293 "model_number": "SPDK bdev 
Controller", 00:15:20.293 "max_namespaces": 32, 00:15:20.293 "min_cntlid": 1, 00:15:20.293 "max_cntlid": 65519, 00:15:20.293 "namespaces": [ 00:15:20.293 { 00:15:20.293 "nsid": 1, 00:15:20.293 "bdev_name": "Malloc2", 00:15:20.293 "name": "Malloc2", 00:15:20.293 "nguid": "14E5C4596BC647D787C7BE0E40CBA048", 00:15:20.293 "uuid": "14e5c459-6bc6-47d7-87c7-be0e40cba048" 00:15:20.293 }, 00:15:20.293 { 00:15:20.293 "nsid": 2, 00:15:20.293 "bdev_name": "Malloc4", 00:15:20.293 "name": "Malloc4", 00:15:20.293 "nguid": "B93984978033409E9D4D9244F8EDCAD0", 00:15:20.293 "uuid": "b9398497-8033-409e-9d4d-9244f8edcad0" 00:15:20.293 } 00:15:20.293 ] 00:15:20.293 } 00:15:20.293 ] 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 5127 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4189605 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 4189605 ']' 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 4189605 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4189605 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4189605' 00:15:20.293 killing process with pid 4189605 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 4189605 00:15:20.293 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 4189605 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=5435 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 5435' 00:15:20.568 Process pid: 5435 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 5435 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 5435 ']' 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.568 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.568 [2024-12-09 11:50:28.378577] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:20.568 [2024-12-09 11:50:28.379500] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:15:20.568 [2024-12-09 11:50:28.379542] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.858 [2024-12-09 11:50:28.461580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.858 [2024-12-09 11:50:28.491408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.858 [2024-12-09 11:50:28.491440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.858 [2024-12-09 11:50:28.491446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.858 [2024-12-09 11:50:28.491454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.858 [2024-12-09 11:50:28.491458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.858 [2024-12-09 11:50:28.492907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.858 [2024-12-09 11:50:28.493008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.858 [2024-12-09 11:50:28.493150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.858 [2024-12-09 11:50:28.493152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.858 [2024-12-09 11:50:28.545052] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:20.858 [2024-12-09 11:50:28.545236] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:20.858 [2024-12-09 11:50:28.545943] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:20.858 [2024-12-09 11:50:28.546960] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:15:20.858 [2024-12-09 11:50:28.547059] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:21.500 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.500 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:21.500 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:22.442 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.702 Malloc1 00:15:22.702 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:22.963 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:23.224 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:23.485 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.485 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:23.485 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:23.485 Malloc2 00:15:23.485 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:23.746 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:24.006 11:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 5435 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 5435 ']' 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 5435 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.006 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 5435 00:15:24.267 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.267 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.267 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 5435' 00:15:24.267 killing process with pid 5435 00:15:24.267 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 5435 00:15:24.267 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 5435 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:24.267 00:15:24.267 real 0m50.936s 00:15:24.267 user 3m15.373s 00:15:24.267 sys 0m2.629s 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:24.267 ************************************ 00:15:24.267 END TEST nvmf_vfio_user 00:15:24.267 ************************************ 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.267 ************************************ 00:15:24.267 START TEST nvmf_vfio_user_nvme_compliance 00:15:24.267 ************************************ 00:15:24.267 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:24.529 * Looking for test storage... 
00:15:24.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.529 --rc genhtml_branch_coverage=1 00:15:24.529 --rc genhtml_function_coverage=1 00:15:24.529 --rc genhtml_legend=1 00:15:24.529 --rc geninfo_all_blocks=1 00:15:24.529 --rc geninfo_unexecuted_blocks=1 00:15:24.529 00:15:24.529 ' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.529 --rc genhtml_branch_coverage=1 00:15:24.529 --rc genhtml_function_coverage=1 00:15:24.529 --rc genhtml_legend=1 00:15:24.529 --rc geninfo_all_blocks=1 00:15:24.529 --rc geninfo_unexecuted_blocks=1 00:15:24.529 00:15:24.529 ' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.529 --rc genhtml_branch_coverage=1 00:15:24.529 --rc genhtml_function_coverage=1 00:15:24.529 --rc genhtml_legend=1 00:15:24.529 --rc geninfo_all_blocks=1 00:15:24.529 --rc geninfo_unexecuted_blocks=1 00:15:24.529 00:15:24.529 ' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.529 --rc genhtml_branch_coverage=1 00:15:24.529 --rc genhtml_function_coverage=1 00:15:24.529 --rc genhtml_legend=1 00:15:24.529 --rc geninfo_all_blocks=1 00:15:24.529 --rc 
geninfo_unexecuted_blocks=1 00:15:24.529 00:15:24.529 ' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.529 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # : 0 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@32 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:15:24.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@56 -- # have_pci_nics=0 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=6202 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 6202' 00:15:24.530 Process pid: 6202 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 6202 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 6202 ']' 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.530 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:24.530 [2024-12-09 11:50:32.405889] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:15:24.530 [2024-12-09 11:50:32.405958] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:24.791 [2024-12-09 11:50:32.491565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:24.791 [2024-12-09 11:50:32.530589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:24.791 [2024-12-09 11:50:32.530626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:24.791 [2024-12-09 11:50:32.530632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:24.791 [2024-12-09 11:50:32.530646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:24.791 [2024-12-09 11:50:32.530650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:24.791 [2024-12-09 11:50:32.531974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:24.791 [2024-12-09 11:50:32.532100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:24.791 [2024-12-09 11:50:32.532101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:25.361 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:25.361 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0
00:15:25.361 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.742 malloc0
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.742 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.743 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:15:26.743
00:15:26.743
00:15:26.743 CUnit - A unit testing framework for C - Version 2.1-3
00:15:26.743 http://cunit.sourceforge.net/
00:15:26.743
00:15:26.743
00:15:26.743 Suite: nvme_compliance
00:15:26.743 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 11:50:34.443100] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:26.743 [2024-12-09 11:50:34.444405] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:15:26.743 [2024-12-09 11:50:34.444416] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:15:26.743 [2024-12-09 11:50:34.444421] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:15:26.743 [2024-12-09 11:50:34.448138] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:26.743 passed
00:15:26.743 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 11:50:34.525655] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:26.743 [2024-12-09 11:50:34.528669] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:26.743 passed
00:15:26.743 Test: admin_identify_ns ...[2024-12-09 11:50:34.608176] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.003 [2024-12-09 11:50:34.668649] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:15:27.003 [2024-12-09 11:50:34.676647] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:15:27.003 [2024-12-09 11:50:34.697728] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.003 passed
00:15:27.003 Test: admin_get_features_mandatory_features ...[2024-12-09 11:50:34.770946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.003 [2024-12-09 11:50:34.773964] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.003 passed
00:15:27.003 Test: admin_get_features_optional_features ...[2024-12-09 11:50:34.852434] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.003 [2024-12-09 11:50:34.855454] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.003 passed
00:15:27.264 Test: admin_set_features_number_of_queues ...[2024-12-09 11:50:34.933222] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.264 [2024-12-09 11:50:35.037741] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.264 passed
00:15:27.264 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 11:50:35.110947] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.264 [2024-12-09 11:50:35.113957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.264 passed
00:15:27.524 Test: admin_get_log_page_with_lpo ...[2024-12-09 11:50:35.191742] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.524 [2024-12-09 11:50:35.261648] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:15:27.524 [2024-12-09 11:50:35.274681] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.524 passed
00:15:27.524 Test: fabric_property_get ...[2024-12-09 11:50:35.348928] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.524 [2024-12-09 11:50:35.350131] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:15:27.524 [2024-12-09 11:50:35.351952] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.524 passed
00:15:27.785 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 11:50:35.430438] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.785 [2024-12-09 11:50:35.431634] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:15:27.785 [2024-12-09 11:50:35.433449] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.785 passed
00:15:27.785 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 11:50:35.512182] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:27.785 [2024-12-09 11:50:35.596644] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:27.785 [2024-12-09 11:50:35.612645] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:27.785 [2024-12-09 11:50:35.617722] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:27.785 passed
00:15:28.045 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 11:50:35.691946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.045 [2024-12-09 11:50:35.693147] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:15:28.045 [2024-12-09 11:50:35.694961] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.045 passed
00:15:28.045 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 11:50:35.770996] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.045 [2024-12-09 11:50:35.850642] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:28.045 [2024-12-09 11:50:35.874642] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:28.045 [2024-12-09 11:50:35.879711] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.045 passed
00:15:28.306 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 11:50:35.951906] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.306 [2024-12-09 11:50:35.953107] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:28.306 [2024-12-09 11:50:35.953126] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:28.306 [2024-12-09 11:50:35.955933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.306 passed
00:15:28.306 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 11:50:36.031998] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.306 [2024-12-09 11:50:36.123649] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:28.306 [2024-12-09 11:50:36.131648] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:28.306 [2024-12-09 11:50:36.139644] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:28.306 [2024-12-09 11:50:36.147643] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:28.306 [2024-12-09 11:50:36.176715] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.567 passed
00:15:28.567 Test: admin_create_io_sq_verify_pc ...[2024-12-09 11:50:36.249914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:28.567 [2024-12-09 11:50:36.266652] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:28.567 [2024-12-09 11:50:36.284084] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:28.567 passed
00:15:28.567 Test: admin_create_io_qp_max_qps ...[2024-12-09 11:50:36.362561] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:29.951 [2024-12-09 11:50:37.468645] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:15:30.211 [2024-12-09 11:50:37.850038] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:30.211 passed
00:15:30.211 Test: admin_create_io_sq_shared_cq ...[2024-12-09 11:50:37.923002] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:30.211 [2024-12-09 11:50:38.058643] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:30.211 [2024-12-09 11:50:38.095690] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:30.472 passed
00:15:30.472
00:15:30.472 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:30.472               suites      1      1    n/a      0        0
00:15:30.472                tests     18     18     18      0        0
00:15:30.472              asserts    360    360    360      0      n/a
00:15:30.472
00:15:30.472 Elapsed time = 1.504 seconds
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 6202 ']'
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6202'
00:15:30.472 killing process with pid 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 6202
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:15:30.472
00:15:30.472 real 0m6.198s
00:15:30.472 user 0m17.585s
00:15:30.472 sys 0m0.520s
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:30.472 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:30.472 ************************************
00:15:30.472 END TEST nvmf_vfio_user_nvme_compliance
00:15:30.472 ************************************
00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:30.733 ************************************
00:15:30.733 START TEST nvmf_vfio_user_fuzz
00:15:30.733 ************************************
00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:30.733 * Looking for test storage...
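
The compliance target above is assembled entirely over SPDK's JSON-RPC interface; in this harness, rpc_cmd is a thin wrapper around scripts/rpc.py. As a sketch only (it assumes an nvmf_tgt is already running and answering on the default /var/tmp/spdk.sock socket, which the harness set up before this excerpt), the same sequence issued by hand would be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    ./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

All 18 CUnit tests passed; the *ERROR* lines inside passing tests are the target's expected rejections of the deliberately malformed commands the suite sends.
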
00:15:30.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.733 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.734 --rc genhtml_branch_coverage=1 00:15:30.734 --rc genhtml_function_coverage=1 00:15:30.734 --rc genhtml_legend=1 00:15:30.734 --rc geninfo_all_blocks=1 00:15:30.734 --rc geninfo_unexecuted_blocks=1 00:15:30.734 00:15:30.734 ' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.734 --rc genhtml_branch_coverage=1 00:15:30.734 --rc genhtml_function_coverage=1 00:15:30.734 --rc genhtml_legend=1 00:15:30.734 --rc geninfo_all_blocks=1 00:15:30.734 --rc geninfo_unexecuted_blocks=1 00:15:30.734 00:15:30.734 ' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.734 --rc genhtml_branch_coverage=1 00:15:30.734 --rc genhtml_function_coverage=1 00:15:30.734 --rc genhtml_legend=1 00:15:30.734 --rc geninfo_all_blocks=1 00:15:30.734 --rc geninfo_unexecuted_blocks=1 00:15:30.734 00:15:30.734 ' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:30.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.734 --rc genhtml_branch_coverage=1 00:15:30.734 --rc genhtml_function_coverage=1 00:15:30.734 --rc genhtml_legend=1 00:15:30.734 --rc geninfo_all_blocks=1 00:15:30.734 --rc geninfo_unexecuted_blocks=1 00:15:30.734 00:15:30.734 ' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # : 0 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@34 
-- # '[' '' -eq 1 ']' 00:15:30.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@56 -- # have_pci_nics=0 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=7602 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 7602' 00:15:30.734 Process pid: 7602 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 7602 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 7602 ']' 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.734 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.735 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
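
The waitforlisten step above is the harness's launch-and-poll pattern: vfio_user_fuzz.sh starts its own nvmf_tgt in the background (pid 7602 here) and retries, up to max_retries=100, until the target answers on /var/tmp/spdk.sock. A rough sketch of that shape (using rpc_get_methods as the readiness probe is an assumption for illustration, not necessarily what the harness calls):

    # launch the target in the background: core mask 0x1, all tracepoint groups
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
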
00:15:30.735 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:30.735 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:30.735 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:31.674 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:31.674 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0
00:15:31.674 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.612 malloc0
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:32.612 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.613 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:15:32.613 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:16:04.717 Fuzzing completed. Shutting down the fuzz application
00:16:04.717
00:16:04.717 Dumping successful admin opcodes:
00:16:04.717 9, 10,
00:16:04.717 Dumping successful io opcodes:
00:16:04.717 0,
00:16:04.717 NS: 0x20000081ef00 I/O qp, Total commands completed: 1413679, total successful commands: 5556, random_seed: 3296217088
00:16:04.718 NS: 0x20000081ef00 admin qp, Total commands completed: 350048, total successful commands: 94, random_seed: 3128056384
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 7602
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 7602 ']'
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 7602
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 7602
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 7602'
00:16:04.718 killing process with pid 7602
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 7602
00:16:04.718 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 7602
00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:16:04.718
00:16:04.718 real 0m32.751s
00:16:04.718 user 0m38.148s
00:16:04.718 sys 0m23.983s
00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- #
xtrace_disable 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 ************************************ 00:16:04.718 END TEST nvmf_vfio_user_fuzz 00:16:04.718 ************************************ 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 ************************************ 00:16:04.718 START TEST nvmf_auth_target 00:16:04.718 ************************************ 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.718 * Looking for test storage... 00:16:04.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:04.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.718 --rc genhtml_branch_coverage=1 00:16:04.718 --rc genhtml_function_coverage=1 00:16:04.718 --rc genhtml_legend=1 00:16:04.718 --rc geninfo_all_blocks=1 00:16:04.718 --rc geninfo_unexecuted_blocks=1 00:16:04.718 00:16:04.718 ' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:04.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.718 --rc genhtml_branch_coverage=1 00:16:04.718 --rc genhtml_function_coverage=1 00:16:04.718 --rc genhtml_legend=1 00:16:04.718 --rc geninfo_all_blocks=1 00:16:04.718 --rc geninfo_unexecuted_blocks=1 00:16:04.718 00:16:04.718 ' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:04.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.718 --rc genhtml_branch_coverage=1 00:16:04.718 --rc genhtml_function_coverage=1 00:16:04.718 --rc genhtml_legend=1 00:16:04.718 --rc geninfo_all_blocks=1 00:16:04.718 --rc geninfo_unexecuted_blocks=1 00:16:04.718 00:16:04.718 ' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:04.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.718 --rc genhtml_branch_coverage=1 00:16:04.718 --rc genhtml_function_coverage=1 00:16:04.718 --rc genhtml_legend=1 00:16:04.718 --rc geninfo_all_blocks=1 00:16:04.718 --rc geninfo_unexecuted_blocks=1 00:16:04.718 00:16:04.718 ' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.718 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.718 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.719 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # : 0 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:16:04.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@56 -- # have_pci_nics=0 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.719 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # xtrace_disable 00:16:04.719 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_devs=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_devs 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_net_devs=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # pci_drivers=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # local -A pci_drivers 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # net_devs=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga net_devs 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # e810=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga e810 00:16:11.320 11:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # x722=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga x722 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # mlx=() 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # local -ga mlx 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.320 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:11.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:11.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:11.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:11.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:11.321 11:51:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT'
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:16:11.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:11.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms
00:16:11.321 
00:16:11.321 --- 10.0.0.2 ping statistics ---
00:16:11.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.321 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:11.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:11.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms
00:16:11.321 
00:16:11.321 --- 10.0.0.1 ping statistics ---
00:16:11.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.321 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=18151
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 18151
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:16:11.321 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 18151 ']'
00:16:11.322 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:11.322 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:11.322 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
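The nvmf_tcp_init sequence traced above boils down to moving one port of the NIC pair into a private network namespace so that host and target exchange real TCP traffic instead of short-circuiting over loopback. A minimal stand-alone sketch, assuming the interface names of this run (cvl_0_0 / cvl_0_1 on the E810 pair; adjust for other hardware):

    # Sketch of the topology set up by nvmf_tcp_init above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                           # sanity check: root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # sanity check: netns -> initiator

This is also why nvmf_tgt is launched under ip netns exec above: the target must bind its listener inside the namespace that owns the 10.0.0.2 port.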
00:16:11.322 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.322 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=18186 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=3b9c2fc49968d9e9dcb61e32ee0e187f9ceb7e6731e4cf30 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.hCc 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 3b9c2fc49968d9e9dcb61e32ee0e187f9ceb7e6731e4cf30 0 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 3b9c2fc49968d9e9dcb61e32ee0e187f9ceb7e6731e4cf30 0 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=3b9c2fc49968d9e9dcb61e32ee0e187f9ceb7e6731e4cf30 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
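The gen_dhchap_key flow traced here (xxd over /dev/urandom, then a pipe into python) produces the DHHC-1 secrets that appear later in the connect calls. A stand-alone sketch of the same steps; note that treating the hex string itself as the secret matches the printed secrets, while the 4-byte little-endian CRC-32 trailer is an assumption inferred from their base64 tails, not read off the script:

    digest=0   # digests map from the trace: 0=null, 1=sha256, 2=sha384, 3=sha512
    len=48     # secret length in hex characters; xxd draws len/2 random bytes
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    # The trace delegates the DHHC-1 wrapping to an inline `python -` helper;
    # mirrored here as a one-liner (CRC-32 trailer is an assumption, see above).
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest" > "$file"
    chmod 0600 "$file"   # same restrictive mode the trace applies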
00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.hCc 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.hCc 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hCc 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=ef7ffd2cc61b4f86eb4aa8d30ae1747c53006b0c2d4b2bee9506cb1ca8291967 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.fcA 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key ef7ffd2cc61b4f86eb4aa8d30ae1747c53006b0c2d4b2bee9506cb1ca8291967 3 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 ef7ffd2cc61b4f86eb4aa8d30ae1747c53006b0c2d4b2bee9506cb1ca8291967 3 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=ef7ffd2cc61b4f86eb4aa8d30ae1747c53006b0c2d4b2bee9506cb1ca8291967 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.fcA 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.fcA 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.fcA 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
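Note the pairing convention emerging here: each keys[i] file gets a ckeys[i] companion. The first is the host's secret, which the target verifies; the second is the controller's secret, which enables bidirectional DH-HMAC-CHAP. On the initiator side the pair surfaces as the two nvme-cli flags used by the connect calls further down in this trace; a trimmed sketch with placeholder secrets (substitute DHHC-1 strings generated as above):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" \
        --dhchap-secret      'DHHC-1:00:<base64-host-secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-controller-secret>:'

Omitting --dhchap-ctrl-secret, as happens later for key3 (whose ckeys slot is left empty), yields unidirectional authentication: the target verifies the host, but the host does not verify the controller.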
00:16:11.896 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c0849f8525d61f347437aa74ab8aeeb1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.BGq 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c0849f8525d61f347437aa74ab8aeeb1 1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c0849f8525d61f347437aa74ab8aeeb1 1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c0849f8525d61f347437aa74ab8aeeb1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.BGq 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.BGq 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BGq 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=15051875a72318c7f8a0fd1f9cac3db96543722864b4d839 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.1SO 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 15051875a72318c7f8a0fd1f9cac3db96543722864b4d839 2 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 15051875a72318c7f8a0fd1f9cac3db96543722864b4d839 2 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:12.158 11:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=15051875a72318c7f8a0fd1f9cac3db96543722864b4d839 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.1SO 00:16:12.158 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.1SO 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1SO 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=292607c8f979bf8238b42b3a8cf37b249ccef9a1f0146e75 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.lp9 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 292607c8f979bf8238b42b3a8cf37b249ccef9a1f0146e75 2 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 292607c8f979bf8238b42b3a8cf37b249ccef9a1f0146e75 2 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=292607c8f979bf8238b42b3a8cf37b249ccef9a1f0146e75 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.lp9 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.lp9 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lp9 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
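All of this key material feeds the connect_authenticate rounds that follow. Each round is the same two-sided pattern; a sketch using the NQNs of this run, with the long rpc.py path abbreviated (an assumption for readability):

    RPC=./scripts/rpc.py    # stands in for the full spdk/scripts/rpc.py path above
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Target side: this host must authenticate with key0, and the controller
    # proves itself back with ckey0 since a controller key is configured.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: pin the negotiable digest and DH group, then attach a controller.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

The nvmf_subsystem_get_qpairs output further down confirms what was negotiated per qpair: auth.state "completed", digest "sha256", dhgroup "null".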
00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7318dbeedd96f2b9ecacbcfb842a0725 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.159 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Q5D 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7318dbeedd96f2b9ecacbcfb842a0725 1 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7318dbeedd96f2b9ecacbcfb842a0725 1 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7318dbeedd96f2b9ecacbcfb842a0725 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:16:12.159 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Q5D 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Q5D 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Q5D 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b667f60359663b4826e1f020f357c3a0f3f9bd4ed0ed753f3f15398b7de90c40 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.BUf 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key b667f60359663b4826e1f020f357c3a0f3f9bd4ed0ed753f3f15398b7de90c40 3 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b667f60359663b4826e1f020f357c3a0f3f9bd4ed0ed753f3f15398b7de90c40 3 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b667f60359663b4826e1f020f357c3a0f3f9bd4ed0ed753f3f15398b7de90c40 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.BUf 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.BUf 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.BUf 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 18151 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 18151 ']' 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 18186 /var/tmp/host.sock 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 18186 ']' 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:12.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
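Before any key can be referenced by name in an RPC, both applications need it in their keyrings: the target (default RPC socket /var/tmp/spdk.sock) and the host app (-r /var/tmp/host.sock). That is exactly what the keyring_file_add_key calls traced next do, once per side per key file; condensed into a loop, assuming the keys/ckeys arrays populated above and the same abbreviated rpc.py path:

    RPC=./scripts/rpc.py   # abbreviation for the full rpc.py path in this trace
    for i in "${!keys[@]}"; do
        $RPC keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
        $RPC -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
        if [[ -n ${ckeys[$i]} ]]; then   # ckeys[3] is empty in this run
            $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            $RPC -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done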
00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.420 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hCc 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hCc 00:16:12.682 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hCc 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.fcA ]] 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fcA 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fcA 00:16:12.943 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fcA 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BGq 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.204 11:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BGq 00:16:13.204 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BGq 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.1SO ]] 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1SO 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1SO 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1SO 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lp9 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lp9 00:16:13.465 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lp9 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Q5D ]] 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q5D 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q5D 00:16:13.727 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q5D 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:13.989 11:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.BUf 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.BUf 00:16:13.989 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.BUf 00:16:14.250 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:14.250 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:14.251 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.251 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.251 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.251 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.251 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.251 
11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:14.512 
00:16:14.512 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:14.512 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:14.512 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:14.774 {
00:16:14.774 "cntlid": 1,
00:16:14.774 "qid": 0,
00:16:14.774 "state": "enabled",
00:16:14.774 "thread": "nvmf_tgt_poll_group_000",
00:16:14.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:14.774 "listen_address": {
00:16:14.774 "trtype": "TCP",
00:16:14.774 "adrfam": "IPv4",
00:16:14.774 "traddr": "10.0.0.2",
00:16:14.774 "trsvcid": "4420"
00:16:14.774 },
00:16:14.774 "peer_address": {
00:16:14.774 "trtype": "TCP",
00:16:14.774 "adrfam": "IPv4",
00:16:14.774 "traddr": "10.0.0.1",
00:16:14.774 "trsvcid": "50470"
00:16:14.774 },
00:16:14.774 "auth": {
00:16:14.774 "state": "completed",
00:16:14.774 "digest": "sha256",
00:16:14.774 "dhgroup": "null"
00:16:14.774 }
00:16:14.774 }
00:16:14.774 ]'
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.774 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:15.036 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:15.036 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:15.978 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.978 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.979 11:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.979 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.239 00:16:16.239 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.239 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.239 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.501 { 00:16:16.501 "cntlid": 3, 00:16:16.501 "qid": 0, 00:16:16.501 "state": "enabled", 00:16:16.501 "thread": "nvmf_tgt_poll_group_000", 00:16:16.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:16.501 "listen_address": { 00:16:16.501 "trtype": "TCP", 00:16:16.501 "adrfam": "IPv4", 00:16:16.501 "traddr": "10.0.0.2", 00:16:16.501 "trsvcid": "4420" 00:16:16.501 }, 00:16:16.501 "peer_address": { 00:16:16.501 "trtype": "TCP", 00:16:16.501 "adrfam": "IPv4", 00:16:16.501 "traddr": "10.0.0.1", 00:16:16.501 "trsvcid": "35282" 00:16:16.501 }, 00:16:16.501 "auth": { 00:16:16.501 "state": "completed", 00:16:16.501 "digest": "sha256", 00:16:16.501 "dhgroup": "null" 00:16:16.501 } 00:16:16.501 } 00:16:16.501 ]' 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.501 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.762 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:16.762 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.335 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.595 11:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.595 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.855 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.855 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.855 { 00:16:17.855 "cntlid": 5, 00:16:17.855 "qid": 0, 00:16:17.855 "state": "enabled", 00:16:17.855 "thread": "nvmf_tgt_poll_group_000", 00:16:17.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.855 "listen_address": { 00:16:17.855 "trtype": "TCP", 00:16:17.855 "adrfam": "IPv4", 00:16:17.855 "traddr": "10.0.0.2", 00:16:17.855 "trsvcid": "4420" 00:16:17.855 }, 00:16:17.855 "peer_address": { 00:16:17.855 "trtype": "TCP", 00:16:17.855 "adrfam": "IPv4", 00:16:17.855 "traddr": "10.0.0.1", 00:16:17.855 "trsvcid": "35306" 00:16:17.855 }, 00:16:17.855 "auth": { 00:16:17.855 "state": "completed", 00:16:17.855 "digest": "sha256", 00:16:17.856 "dhgroup": "null" 00:16:17.856 } 00:16:17.856 } 00:16:17.856 ]' 00:16:17.856 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.116 11:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.116 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.377 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:18.377 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:18.947 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.948 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.208 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.208 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.469 { 00:16:19.469 "cntlid": 7, 00:16:19.469 "qid": 0, 00:16:19.469 "state": "enabled", 00:16:19.469 "thread": "nvmf_tgt_poll_group_000", 00:16:19.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:19.469 "listen_address": { 00:16:19.469 "trtype": "TCP", 00:16:19.469 "adrfam": "IPv4", 00:16:19.469 "traddr": "10.0.0.2", 00:16:19.469 "trsvcid": "4420" 00:16:19.469 }, 00:16:19.469 "peer_address": { 00:16:19.469 "trtype": "TCP", 00:16:19.469 "adrfam": "IPv4", 00:16:19.469 "traddr": "10.0.0.1", 00:16:19.469 "trsvcid": "35340" 00:16:19.469 }, 00:16:19.469 "auth": { 00:16:19.469 "state": "completed", 00:16:19.469 "digest": "sha256", 00:16:19.469 "dhgroup": "null" 00:16:19.469 } 00:16:19.469 } 00:16:19.469 ]' 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.469 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:19.729 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.670 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.671 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.931 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.191 { 00:16:21.191 "cntlid": 9, 00:16:21.191 "qid": 0, 00:16:21.191 "state": "enabled", 00:16:21.191 "thread": "nvmf_tgt_poll_group_000", 00:16:21.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:21.191 "listen_address": { 00:16:21.191 "trtype": "TCP", 00:16:21.191 "adrfam": "IPv4", 00:16:21.191 "traddr": "10.0.0.2", 00:16:21.191 "trsvcid": "4420" 00:16:21.191 }, 00:16:21.191 "peer_address": { 00:16:21.191 "trtype": "TCP", 00:16:21.191 "adrfam": "IPv4", 00:16:21.191 "traddr": "10.0.0.1", 00:16:21.191 "trsvcid": "35372" 00:16:21.191 }, 00:16:21.191 "auth": { 00:16:21.191 "state": "completed", 00:16:21.191 "digest": "sha256", 00:16:21.191 "dhgroup": "ffdhe2048" 00:16:21.191 } 00:16:21.191 } 00:16:21.191 ]' 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.191 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.451 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:21.451 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.020 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.280 11:51:29 
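The ckey assignment traced just above is the bash "optional argument" idiom: ${ckeys[$3]:+...} expands to the --dhchap-ctrlr-key flag pair only when a controller key exists for key index $3, so bidirectional authentication is exercised only where a ckey was generated. A standalone sketch of the same pattern, with hypothetical variable names:

    ckey="ckey1"                                  # empty/unset when no controller key exists
    args=(${ckey:+--dhchap-ctrlr-key "$ckey"})    # expands to two words, or to nothing at all
    echo "optional flags: ${args[*]:-<none>}"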
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.280 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.280 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.280 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.280 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.281 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.541 00:16:22.541 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.541 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.541 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.801 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.801 { 00:16:22.801 "cntlid": 11, 00:16:22.801 "qid": 0, 00:16:22.801 "state": "enabled", 00:16:22.801 "thread": "nvmf_tgt_poll_group_000", 00:16:22.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.801 "listen_address": { 00:16:22.801 "trtype": "TCP", 00:16:22.801 "adrfam": "IPv4", 00:16:22.801 "traddr": "10.0.0.2", 00:16:22.801 "trsvcid": "4420" 00:16:22.801 }, 00:16:22.801 "peer_address": { 00:16:22.801 "trtype": "TCP", 00:16:22.801 "adrfam": "IPv4", 00:16:22.801 "traddr": "10.0.0.1", 00:16:22.802 "trsvcid": "35390" 00:16:22.802 }, 00:16:22.802 "auth": { 00:16:22.802 "state": "completed", 00:16:22.802 "digest": "sha256", 00:16:22.802 "dhgroup": "ffdhe2048" 00:16:22.802 } 00:16:22.802 } 00:16:22.802 ]' 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.802 11:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.802 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.061 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:23.061 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.631 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.891 11:51:31 
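Each connect_authenticate pass first pins the host's allowed DH-HMAC-CHAP parameters, then attaches a controller with the key pair under test. Condensed from the host-side RPCs traced in this run (the host SPDK app is driven over its own socket, /var/tmp/host.sock):

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2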
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.891 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.891 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.151 { 00:16:24.151 "cntlid": 13, 00:16:24.151 "qid": 0, 00:16:24.151 "state": "enabled", 00:16:24.151 "thread": "nvmf_tgt_poll_group_000", 00:16:24.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.151 "listen_address": { 00:16:24.151 "trtype": "TCP", 00:16:24.151 "adrfam": "IPv4", 00:16:24.151 "traddr": "10.0.0.2", 00:16:24.151 "trsvcid": "4420" 00:16:24.151 }, 00:16:24.151 "peer_address": { 00:16:24.151 "trtype": "TCP", 00:16:24.151 "adrfam": "IPv4", 00:16:24.151 "traddr": "10.0.0.1", 00:16:24.151 "trsvcid": "35410" 00:16:24.151 }, 00:16:24.151 "auth": { 00:16:24.151 "state": "completed", 00:16:24.151 "digest": 
"sha256", 00:16:24.151 "dhgroup": "ffdhe2048" 00:16:24.151 } 00:16:24.151 } 00:16:24.151 ]' 00:16:24.151 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.151 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.151 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:24.412 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:24.982 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.242 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.242 11:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.242 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.501 00:16:25.501 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.501 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.501 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.761 { 00:16:25.761 "cntlid": 15, 00:16:25.761 "qid": 0, 00:16:25.761 "state": "enabled", 00:16:25.761 "thread": "nvmf_tgt_poll_group_000", 00:16:25.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:25.761 "listen_address": { 00:16:25.761 "trtype": "TCP", 00:16:25.761 "adrfam": "IPv4", 00:16:25.761 "traddr": "10.0.0.2", 00:16:25.761 "trsvcid": "4420" 00:16:25.761 }, 00:16:25.761 "peer_address": { 00:16:25.761 "trtype": "TCP", 00:16:25.761 "adrfam": "IPv4", 00:16:25.761 "traddr": "10.0.0.1", 00:16:25.761 
"trsvcid": "35438" 00:16:25.761 }, 00:16:25.761 "auth": { 00:16:25.761 "state": "completed", 00:16:25.761 "digest": "sha256", 00:16:25.761 "dhgroup": "ffdhe2048" 00:16:25.761 } 00:16:25.761 } 00:16:25.761 ]' 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.761 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.021 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.021 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.021 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.021 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:26.021 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.599 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:26.859 11:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.859 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.120 00:16:27.120 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.120 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.120 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.384 { 00:16:27.384 "cntlid": 17, 00:16:27.384 "qid": 0, 00:16:27.384 "state": "enabled", 00:16:27.384 "thread": "nvmf_tgt_poll_group_000", 00:16:27.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.384 "listen_address": { 00:16:27.384 "trtype": "TCP", 00:16:27.384 "adrfam": "IPv4", 
00:16:27.384 "traddr": "10.0.0.2", 00:16:27.384 "trsvcid": "4420" 00:16:27.384 }, 00:16:27.384 "peer_address": { 00:16:27.384 "trtype": "TCP", 00:16:27.384 "adrfam": "IPv4", 00:16:27.384 "traddr": "10.0.0.1", 00:16:27.384 "trsvcid": "54136" 00:16:27.384 }, 00:16:27.384 "auth": { 00:16:27.384 "state": "completed", 00:16:27.384 "digest": "sha256", 00:16:27.384 "dhgroup": "ffdhe3072" 00:16:27.384 } 00:16:27.384 } 00:16:27.384 ]' 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.384 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.645 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:27.645 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.215 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.476 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.736 00:16:28.736 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.736 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.736 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.996 { 
00:16:28.996 "cntlid": 19, 00:16:28.996 "qid": 0, 00:16:28.996 "state": "enabled", 00:16:28.996 "thread": "nvmf_tgt_poll_group_000", 00:16:28.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.996 "listen_address": { 00:16:28.996 "trtype": "TCP", 00:16:28.996 "adrfam": "IPv4", 00:16:28.996 "traddr": "10.0.0.2", 00:16:28.996 "trsvcid": "4420" 00:16:28.996 }, 00:16:28.996 "peer_address": { 00:16:28.996 "trtype": "TCP", 00:16:28.996 "adrfam": "IPv4", 00:16:28.996 "traddr": "10.0.0.1", 00:16:28.996 "trsvcid": "54162" 00:16:28.996 }, 00:16:28.996 "auth": { 00:16:28.996 "state": "completed", 00:16:28.996 "digest": "sha256", 00:16:28.996 "dhgroup": "ffdhe3072" 00:16:28.996 } 00:16:28.996 } 00:16:28.996 ]' 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.996 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.257 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:29.257 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.829 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.090 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.351 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.351 11:51:38 
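On the target side, each pass grants the host NQN access to the subsystem and binds the key pair it must authenticate with; key2/ckey2 are names of keys registered earlier in the test (outside this excerpt). The single RPC behind the rpc_cmd trace above, written out as plain rpc.py against the target's default socket (an inference from the hostrpc wrapper traced at auth.sh@31):

    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2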
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.351 { 00:16:30.351 "cntlid": 21, 00:16:30.351 "qid": 0, 00:16:30.351 "state": "enabled", 00:16:30.351 "thread": "nvmf_tgt_poll_group_000", 00:16:30.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.351 "listen_address": { 00:16:30.351 "trtype": "TCP", 00:16:30.351 "adrfam": "IPv4", 00:16:30.351 "traddr": "10.0.0.2", 00:16:30.351 "trsvcid": "4420" 00:16:30.351 }, 00:16:30.351 "peer_address": { 00:16:30.351 "trtype": "TCP", 00:16:30.351 "adrfam": "IPv4", 00:16:30.351 "traddr": "10.0.0.1", 00:16:30.351 "trsvcid": "54190" 00:16:30.351 }, 00:16:30.351 "auth": { 00:16:30.351 "state": "completed", 00:16:30.351 "digest": "sha256", 00:16:30.351 "dhgroup": "ffdhe3072" 00:16:30.351 } 00:16:30.351 } 00:16:30.351 ]' 00:16:30.351 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.611 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.872 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:30.872 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.443 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.704 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:31.704 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.704 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.704 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.704 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.705 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.965 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.965 11:51:39 
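When a pass completes, teardown mirrors setup before the next key is tried. Combining the cleanup commands that repeat throughout this section (rpc_cmd shown here as plain rpc.py against the target's default socket):

    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # drop the SPDK host-side controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                    # drop the kernel-initiator session
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be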
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.965 { 00:16:31.965 "cntlid": 23, 00:16:31.965 "qid": 0, 00:16:31.965 "state": "enabled", 00:16:31.965 "thread": "nvmf_tgt_poll_group_000", 00:16:31.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.965 "listen_address": { 00:16:31.965 "trtype": "TCP", 00:16:31.965 "adrfam": "IPv4", 00:16:31.965 "traddr": "10.0.0.2", 00:16:31.965 "trsvcid": "4420" 00:16:31.965 }, 00:16:31.965 "peer_address": { 00:16:31.965 "trtype": "TCP", 00:16:31.965 "adrfam": "IPv4", 00:16:31.965 "traddr": "10.0.0.1", 00:16:31.965 "trsvcid": "54214" 00:16:31.965 }, 00:16:31.965 "auth": { 00:16:31.965 "state": "completed", 00:16:31.965 "digest": "sha256", 00:16:31.965 "dhgroup": "ffdhe3072" 00:16:31.965 } 00:16:31.965 } 00:16:31.965 ]' 00:16:31.965 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.225 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.225 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.225 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.225 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.226 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.226 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.226 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.486 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:32.486 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.058 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.319 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.579 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.579 { 00:16:33.579 "cntlid": 25, 00:16:33.579 "qid": 0, 00:16:33.579 "state": "enabled", 00:16:33.579 "thread": "nvmf_tgt_poll_group_000", 00:16:33.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.579 "listen_address": { 00:16:33.579 "trtype": "TCP", 00:16:33.579 "adrfam": "IPv4", 00:16:33.579 "traddr": "10.0.0.2", 00:16:33.579 "trsvcid": "4420" 00:16:33.579 }, 00:16:33.579 "peer_address": { 00:16:33.579 "trtype": "TCP", 00:16:33.579 "adrfam": "IPv4", 00:16:33.579 "traddr": "10.0.0.1", 00:16:33.579 "trsvcid": "54256" 00:16:33.579 }, 00:16:33.579 "auth": { 00:16:33.579 "state": "completed", 00:16:33.579 "digest": "sha256", 00:16:33.579 "dhgroup": "ffdhe4096" 00:16:33.579 } 00:16:33.579 } 00:16:33.579 ]' 00:16:33.579 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.843 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.104 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:34.104 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.673 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.933 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.194 00:16:35.194 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.194 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.194 11:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.194 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.194 { 00:16:35.194 "cntlid": 27, 00:16:35.194 "qid": 0, 00:16:35.194 "state": "enabled", 00:16:35.194 "thread": "nvmf_tgt_poll_group_000", 00:16:35.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.194 "listen_address": { 00:16:35.194 "trtype": "TCP", 00:16:35.194 "adrfam": "IPv4", 00:16:35.194 "traddr": "10.0.0.2", 00:16:35.194 "trsvcid": "4420" 00:16:35.194 }, 00:16:35.194 "peer_address": { 00:16:35.194 "trtype": "TCP", 00:16:35.194 "adrfam": "IPv4", 00:16:35.194 "traddr": "10.0.0.1", 00:16:35.194 "trsvcid": "54298" 00:16:35.194 }, 00:16:35.194 "auth": { 00:16:35.194 "state": "completed", 00:16:35.194 "digest": "sha256", 00:16:35.194 "dhgroup": "ffdhe4096" 00:16:35.194 } 00:16:35.194 } 00:16:35.194 ]' 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.454 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.713 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:35.713 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:36.282 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.282 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.282 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.282 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.283 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.283 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.283 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.283 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.283 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.542 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.803 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.803 11:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.803 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.064 { 00:16:37.064 "cntlid": 29, 00:16:37.064 "qid": 0, 00:16:37.064 "state": "enabled", 00:16:37.064 "thread": "nvmf_tgt_poll_group_000", 00:16:37.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.064 "listen_address": { 00:16:37.064 "trtype": "TCP", 00:16:37.064 "adrfam": "IPv4", 00:16:37.064 "traddr": "10.0.0.2", 00:16:37.064 "trsvcid": "4420" 00:16:37.064 }, 00:16:37.064 "peer_address": { 00:16:37.064 "trtype": "TCP", 00:16:37.064 "adrfam": "IPv4", 00:16:37.064 "traddr": "10.0.0.1", 00:16:37.064 "trsvcid": "57166" 00:16:37.064 }, 00:16:37.064 "auth": { 00:16:37.064 "state": "completed", 00:16:37.064 "digest": "sha256", 00:16:37.064 "dhgroup": "ffdhe4096" 00:16:37.064 } 00:16:37.064 } 00:16:37.064 ]' 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.064 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.327 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:37.327 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret 
DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.900 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.160 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.421 00:16:38.421 11:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.421 { 00:16:38.421 "cntlid": 31, 00:16:38.421 "qid": 0, 00:16:38.421 "state": "enabled", 00:16:38.421 "thread": "nvmf_tgt_poll_group_000", 00:16:38.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.421 "listen_address": { 00:16:38.421 "trtype": "TCP", 00:16:38.421 "adrfam": "IPv4", 00:16:38.421 "traddr": "10.0.0.2", 00:16:38.421 "trsvcid": "4420" 00:16:38.421 }, 00:16:38.421 "peer_address": { 00:16:38.421 "trtype": "TCP", 00:16:38.421 "adrfam": "IPv4", 00:16:38.421 "traddr": "10.0.0.1", 00:16:38.421 "trsvcid": "57192" 00:16:38.421 }, 00:16:38.421 "auth": { 00:16:38.421 "state": "completed", 00:16:38.421 "digest": "sha256", 00:16:38.421 "dhgroup": "ffdhe4096" 00:16:38.421 } 00:16:38.421 } 00:16:38.421 ]' 00:16:38.421 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.683 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.944 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:38.944 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:39.514 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.514 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.514 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.514 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.515 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.515 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.515 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.515 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.515 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.774 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:39.774 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.775 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.034 00:16:40.034 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.034 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.034 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.295 { 00:16:40.295 "cntlid": 33, 00:16:40.295 "qid": 0, 00:16:40.295 "state": "enabled", 00:16:40.295 "thread": "nvmf_tgt_poll_group_000", 00:16:40.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.295 "listen_address": { 00:16:40.295 "trtype": "TCP", 00:16:40.295 "adrfam": "IPv4", 00:16:40.295 "traddr": "10.0.0.2", 00:16:40.295 "trsvcid": "4420" 00:16:40.295 }, 00:16:40.295 "peer_address": { 00:16:40.295 "trtype": "TCP", 00:16:40.295 "adrfam": "IPv4", 00:16:40.295 "traddr": "10.0.0.1", 00:16:40.295 "trsvcid": "57218" 00:16:40.295 }, 00:16:40.295 "auth": { 00:16:40.295 "state": "completed", 00:16:40.295 "digest": "sha256", 00:16:40.295 "dhgroup": "ffdhe6144" 00:16:40.295 } 00:16:40.295 } 00:16:40.295 ]' 00:16:40.295 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.295 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.554 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:40.554 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:41.124 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.124 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.124 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.124 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.125 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.125 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.125 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.125 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.385 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.645 00:16:41.645 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.645 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.645 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.905 { 00:16:41.905 "cntlid": 35, 00:16:41.905 "qid": 0, 00:16:41.905 "state": "enabled", 00:16:41.905 "thread": "nvmf_tgt_poll_group_000", 00:16:41.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.905 "listen_address": { 00:16:41.905 "trtype": "TCP", 00:16:41.905 "adrfam": "IPv4", 00:16:41.905 "traddr": "10.0.0.2", 00:16:41.905 "trsvcid": "4420" 00:16:41.905 }, 00:16:41.905 "peer_address": { 00:16:41.905 "trtype": "TCP", 00:16:41.905 "adrfam": "IPv4", 00:16:41.905 "traddr": "10.0.0.1", 00:16:41.905 "trsvcid": "57234" 00:16:41.905 }, 00:16:41.905 "auth": { 00:16:41.905 "state": "completed", 00:16:41.905 "digest": "sha256", 00:16:41.905 "dhgroup": "ffdhe6144" 00:16:41.905 } 00:16:41.905 } 00:16:41.905 ]' 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.905 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.166 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.166 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.166 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.166 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:42.166 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.105 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.365 00:16:43.365 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.365 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.365 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.627 { 00:16:43.627 "cntlid": 37, 00:16:43.627 "qid": 0, 00:16:43.627 "state": "enabled", 00:16:43.627 "thread": "nvmf_tgt_poll_group_000", 00:16:43.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.627 "listen_address": { 00:16:43.627 "trtype": "TCP", 00:16:43.627 "adrfam": "IPv4", 00:16:43.627 "traddr": "10.0.0.2", 00:16:43.627 "trsvcid": "4420" 00:16:43.627 }, 00:16:43.627 "peer_address": { 00:16:43.627 "trtype": "TCP", 00:16:43.627 "adrfam": "IPv4", 00:16:43.627 "traddr": "10.0.0.1", 00:16:43.627 "trsvcid": "57262" 00:16:43.627 }, 00:16:43.627 "auth": { 00:16:43.627 "state": "completed", 00:16:43.627 "digest": "sha256", 00:16:43.627 "dhgroup": "ffdhe6144" 00:16:43.627 } 00:16:43.627 } 00:16:43.627 ]' 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.627 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.887 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.887 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:43.887 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.887 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:43.888 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.831 11:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.831 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.092 00:16:45.092 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.092 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.092 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.353 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.353 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.353 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.354 { 00:16:45.354 "cntlid": 39, 00:16:45.354 "qid": 0, 00:16:45.354 "state": "enabled", 00:16:45.354 "thread": "nvmf_tgt_poll_group_000", 00:16:45.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.354 "listen_address": { 00:16:45.354 "trtype": "TCP", 00:16:45.354 "adrfam": "IPv4", 00:16:45.354 "traddr": "10.0.0.2", 00:16:45.354 "trsvcid": "4420" 00:16:45.354 }, 00:16:45.354 "peer_address": { 00:16:45.354 "trtype": "TCP", 00:16:45.354 "adrfam": "IPv4", 00:16:45.354 "traddr": "10.0.0.1", 00:16:45.354 "trsvcid": "57284" 00:16:45.354 }, 00:16:45.354 "auth": { 00:16:45.354 "state": "completed", 00:16:45.354 "digest": "sha256", 00:16:45.354 "dhgroup": "ffdhe6144" 00:16:45.354 } 00:16:45.354 } 00:16:45.354 ]' 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.354 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.615 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:45.615 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:46.184 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.184 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.184 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.184 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
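The trace above is one pass of the triple loop visible in target/auth.sh (the @118-@123 markers): for every digest, every DH group, and every key index, the host's NVMe bdev options are narrowed to that single digest/dhgroup pair, the host NQN is registered on the subsystem with the matching DH-HMAC-CHAP key (plus a controller key when one is defined), and a controller is attached over the host RPC socket. A minimal sketch of that loop follows; it paraphrases what the xtrace shows, with $hostnqn standing in for the long uuid-based NQN and the hostrpc/rpc_cmd helpers and keys/ckeys arrays assumed to be defined as in the log:

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Allow exactly one digest/dhgroup pair on the host side.
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # The controller key is optional; mirror the ckey=() idiom from the trace (@68).
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" "${ckey[@]}"
        # Attaching the controller is what actually performs DH-HMAC-CHAP.
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key "key$keyid" "${ckey[@]}"
      done
    done
  done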
00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.444 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.015 00:16:47.015 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.015 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.015 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.275 { 00:16:47.275 "cntlid": 41, 00:16:47.275 "qid": 0, 00:16:47.275 "state": "enabled", 00:16:47.275 "thread": "nvmf_tgt_poll_group_000", 00:16:47.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.275 "listen_address": { 00:16:47.275 "trtype": "TCP", 00:16:47.275 "adrfam": "IPv4", 00:16:47.275 "traddr": "10.0.0.2", 00:16:47.275 "trsvcid": "4420" 00:16:47.275 }, 00:16:47.275 "peer_address": { 00:16:47.275 "trtype": "TCP", 00:16:47.275 "adrfam": "IPv4", 00:16:47.275 "traddr": "10.0.0.1", 00:16:47.275 "trsvcid": "46968" 00:16:47.275 }, 00:16:47.275 "auth": { 00:16:47.275 "state": "completed", 00:16:47.275 "digest": "sha256", 00:16:47.275 "dhgroup": "ffdhe8192" 00:16:47.275 } 00:16:47.275 } 00:16:47.275 ]' 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.275 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.275 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.275 11:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.275 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.275 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.275 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.534 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:47.534 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.104 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.364 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.939 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.939 { 00:16:48.939 "cntlid": 43, 00:16:48.939 "qid": 0, 00:16:48.939 "state": "enabled", 00:16:48.939 "thread": "nvmf_tgt_poll_group_000", 00:16:48.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.939 "listen_address": { 00:16:48.939 "trtype": "TCP", 00:16:48.939 "adrfam": "IPv4", 00:16:48.939 "traddr": "10.0.0.2", 00:16:48.939 "trsvcid": "4420" 00:16:48.939 }, 00:16:48.939 "peer_address": { 00:16:48.939 "trtype": "TCP", 00:16:48.939 "adrfam": "IPv4", 00:16:48.939 "traddr": "10.0.0.1", 00:16:48.939 "trsvcid": "46988" 00:16:48.939 }, 00:16:48.939 "auth": { 00:16:48.939 "state": "completed", 00:16:48.939 "digest": "sha256", 00:16:48.939 "dhgroup": "ffdhe8192" 00:16:48.939 } 00:16:48.939 } 00:16:48.939 ]' 00:16:48.939 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.245 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:49.245 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.246 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.246 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.246 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.246 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.246 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.246 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:49.246 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.894 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.168 11:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.168 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.749 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.749 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.749 { 00:16:50.749 "cntlid": 45, 00:16:50.749 "qid": 0, 00:16:50.749 "state": "enabled", 00:16:50.749 "thread": "nvmf_tgt_poll_group_000", 00:16:50.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.749 "listen_address": { 00:16:50.749 "trtype": "TCP", 00:16:50.749 "adrfam": "IPv4", 00:16:50.749 "traddr": "10.0.0.2", 00:16:50.749 "trsvcid": "4420" 00:16:50.749 }, 00:16:50.749 "peer_address": { 00:16:50.749 "trtype": "TCP", 00:16:50.749 "adrfam": "IPv4", 00:16:50.749 "traddr": "10.0.0.1", 00:16:50.749 "trsvcid": "47008" 00:16:50.749 }, 00:16:50.749 "auth": { 00:16:50.749 "state": "completed", 00:16:50.749 "digest": "sha256", 00:16:50.749 "dhgroup": "ffdhe8192" 00:16:50.749 } 00:16:50.749 } 00:16:50.749 ]' 00:16:50.749 
11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.010 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.271 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:51.271 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.843 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.104 11:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.104 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.365 00:16:52.365 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.365 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.365 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.626 { 00:16:52.626 "cntlid": 47, 00:16:52.626 "qid": 0, 00:16:52.626 "state": "enabled", 00:16:52.626 "thread": "nvmf_tgt_poll_group_000", 00:16:52.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.626 "listen_address": { 00:16:52.626 "trtype": "TCP", 00:16:52.626 "adrfam": "IPv4", 00:16:52.626 "traddr": "10.0.0.2", 00:16:52.626 "trsvcid": "4420" 00:16:52.626 }, 00:16:52.626 "peer_address": { 00:16:52.626 "trtype": "TCP", 00:16:52.626 "adrfam": "IPv4", 00:16:52.626 "traddr": "10.0.0.1", 00:16:52.626 "trsvcid": "47032" 00:16:52.626 }, 00:16:52.626 "auth": { 00:16:52.626 "state": "completed", 00:16:52.626 
"digest": "sha256", 00:16:52.626 "dhgroup": "ffdhe8192" 00:16:52.626 } 00:16:52.626 } 00:16:52.626 ]' 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.626 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.888 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.888 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.888 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.888 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:52.888 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.459 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:53.720 11:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.720 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.981 00:16:53.981 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.981 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.981 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.242 { 00:16:54.242 "cntlid": 49, 00:16:54.242 "qid": 0, 00:16:54.242 "state": "enabled", 00:16:54.242 "thread": "nvmf_tgt_poll_group_000", 00:16:54.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.242 "listen_address": { 00:16:54.242 "trtype": "TCP", 00:16:54.242 "adrfam": "IPv4", 
00:16:54.242 "traddr": "10.0.0.2", 00:16:54.242 "trsvcid": "4420" 00:16:54.242 }, 00:16:54.242 "peer_address": { 00:16:54.242 "trtype": "TCP", 00:16:54.242 "adrfam": "IPv4", 00:16:54.242 "traddr": "10.0.0.1", 00:16:54.242 "trsvcid": "47064" 00:16:54.242 }, 00:16:54.242 "auth": { 00:16:54.242 "state": "completed", 00:16:54.242 "digest": "sha384", 00:16:54.242 "dhgroup": "null" 00:16:54.242 } 00:16:54.242 } 00:16:54.242 ]' 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.242 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.242 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.242 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.242 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.242 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.242 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.502 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:54.502 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.070 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.331 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.591 00:16:55.591 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.591 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.591 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.852 { 00:16:55.852 "cntlid": 51, 00:16:55.852 "qid": 0, 00:16:55.852 "state": "enabled", 
00:16:55.852 "thread": "nvmf_tgt_poll_group_000", 00:16:55.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.852 "listen_address": { 00:16:55.852 "trtype": "TCP", 00:16:55.852 "adrfam": "IPv4", 00:16:55.852 "traddr": "10.0.0.2", 00:16:55.852 "trsvcid": "4420" 00:16:55.852 }, 00:16:55.852 "peer_address": { 00:16:55.852 "trtype": "TCP", 00:16:55.852 "adrfam": "IPv4", 00:16:55.852 "traddr": "10.0.0.1", 00:16:55.852 "trsvcid": "47080" 00:16:55.852 }, 00:16:55.852 "auth": { 00:16:55.852 "state": "completed", 00:16:55.852 "digest": "sha384", 00:16:55.852 "dhgroup": "null" 00:16:55.852 } 00:16:55.852 } 00:16:55.852 ]' 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.852 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.128 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:56.128 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:56.702 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.962 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.962 00:16:57.222 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.222 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.222 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.222 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.222 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.222 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.222 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.222 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.222 11:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.222 { 00:16:57.222 "cntlid": 53, 00:16:57.222 "qid": 0, 00:16:57.222 "state": "enabled", 00:16:57.222 "thread": "nvmf_tgt_poll_group_000", 00:16:57.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.222 "listen_address": { 00:16:57.222 "trtype": "TCP", 00:16:57.222 "adrfam": "IPv4", 00:16:57.222 "traddr": "10.0.0.2", 00:16:57.222 "trsvcid": "4420" 00:16:57.222 }, 00:16:57.222 "peer_address": { 00:16:57.222 "trtype": "TCP", 00:16:57.222 "adrfam": "IPv4", 00:16:57.222 "traddr": "10.0.0.1", 00:16:57.222 "trsvcid": "37906" 00:16:57.223 }, 00:16:57.223 "auth": { 00:16:57.223 "state": "completed", 00:16:57.223 "digest": "sha384", 00:16:57.223 "dhgroup": "null" 00:16:57.223 } 00:16:57.223 } 00:16:57.223 ]' 00:16:57.223 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.223 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.223 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:57.482 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.423 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.423 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.684 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.684 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.944 { 00:16:58.944 "cntlid": 55, 00:16:58.944 "qid": 0, 00:16:58.944 "state": "enabled", 00:16:58.944 "thread": "nvmf_tgt_poll_group_000", 00:16:58.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.944 "listen_address": { 00:16:58.944 "trtype": "TCP", 00:16:58.944 "adrfam": "IPv4", 00:16:58.944 "traddr": "10.0.0.2", 00:16:58.944 "trsvcid": "4420" 00:16:58.944 }, 00:16:58.944 "peer_address": { 00:16:58.944 "trtype": "TCP", 00:16:58.944 "adrfam": "IPv4", 00:16:58.944 "traddr": "10.0.0.1", 00:16:58.944 "trsvcid": "37926" 00:16:58.944 }, 00:16:58.944 "auth": { 00:16:58.944 "state": "completed", 00:16:58.944 "digest": "sha384", 00:16:58.944 "dhgroup": "null" 00:16:58.944 } 00:16:58.944 } 00:16:58.944 ]' 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.944 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.205 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:59.205 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.775 11:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.775 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.035 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.036 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.036 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.296 00:17:00.296 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.296 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.296 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.296 { 00:17:00.296 "cntlid": 57, 00:17:00.296 "qid": 0, 00:17:00.296 "state": "enabled", 00:17:00.296 "thread": "nvmf_tgt_poll_group_000", 00:17:00.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.296 "listen_address": { 00:17:00.296 "trtype": "TCP", 00:17:00.296 "adrfam": "IPv4", 00:17:00.296 "traddr": "10.0.0.2", 00:17:00.296 "trsvcid": "4420" 00:17:00.296 }, 00:17:00.296 "peer_address": { 00:17:00.296 "trtype": "TCP", 00:17:00.296 "adrfam": "IPv4", 00:17:00.296 "traddr": "10.0.0.1", 00:17:00.296 "trsvcid": "37946" 00:17:00.296 }, 00:17:00.296 "auth": { 00:17:00.296 "state": "completed", 00:17:00.296 "digest": "sha384", 00:17:00.296 "dhgroup": "ffdhe2048" 00:17:00.296 } 00:17:00.296 } 00:17:00.296 ]' 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.296 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:00.557 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:01.497 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.497 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.497 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.498 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.758 00:17:01.758 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.758 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.758 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.018 { 00:17:02.018 "cntlid": 59, 00:17:02.018 "qid": 0, 00:17:02.018 "state": "enabled", 00:17:02.018 "thread": "nvmf_tgt_poll_group_000", 00:17:02.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.018 "listen_address": { 00:17:02.018 "trtype": "TCP", 00:17:02.018 "adrfam": "IPv4", 00:17:02.018 "traddr": "10.0.0.2", 00:17:02.018 "trsvcid": "4420" 00:17:02.018 }, 00:17:02.018 "peer_address": { 00:17:02.018 "trtype": "TCP", 00:17:02.018 "adrfam": "IPv4", 00:17:02.018 "traddr": "10.0.0.1", 00:17:02.018 "trsvcid": "37980" 00:17:02.018 }, 00:17:02.018 "auth": { 00:17:02.018 "state": "completed", 00:17:02.018 "digest": "sha384", 00:17:02.018 "dhgroup": "ffdhe2048" 00:17:02.018 } 00:17:02.018 } 00:17:02.018 ]' 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.018 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.279 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:02.279 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.849 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.370 00:17:03.370 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.370 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.370 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.631 { 00:17:03.631 "cntlid": 61, 00:17:03.631 "qid": 0, 00:17:03.631 "state": "enabled", 00:17:03.631 "thread": "nvmf_tgt_poll_group_000", 00:17:03.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.631 "listen_address": { 00:17:03.631 "trtype": "TCP", 00:17:03.631 "adrfam": "IPv4", 00:17:03.631 "traddr": "10.0.0.2", 00:17:03.631 "trsvcid": "4420" 00:17:03.631 }, 00:17:03.631 "peer_address": { 00:17:03.631 "trtype": "TCP", 00:17:03.631 "adrfam": "IPv4", 00:17:03.631 "traddr": "10.0.0.1", 00:17:03.631 "trsvcid": "38006" 00:17:03.631 }, 00:17:03.631 "auth": { 00:17:03.631 "state": "completed", 00:17:03.631 "digest": "sha384", 00:17:03.631 "dhgroup": "ffdhe2048" 00:17:03.631 } 00:17:03.631 } 00:17:03.631 ]' 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.631 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.900 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:03.900 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:04.476 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.476 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.476 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.476 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.477 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.477 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.477 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.477 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.737 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.997 00:17:04.997 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.997 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.997 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.257 { 00:17:05.257 "cntlid": 63, 00:17:05.257 "qid": 0, 00:17:05.257 "state": "enabled", 00:17:05.257 "thread": "nvmf_tgt_poll_group_000", 00:17:05.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.257 "listen_address": { 00:17:05.257 "trtype": "TCP", 00:17:05.257 "adrfam": "IPv4", 00:17:05.257 "traddr": "10.0.0.2", 00:17:05.257 "trsvcid": "4420" 00:17:05.257 }, 00:17:05.257 "peer_address": { 00:17:05.257 "trtype": "TCP", 00:17:05.257 "adrfam": "IPv4", 00:17:05.257 "traddr": "10.0.0.1", 00:17:05.257 "trsvcid": "38026" 00:17:05.257 }, 00:17:05.257 "auth": { 00:17:05.257 "state": "completed", 00:17:05.257 "digest": "sha384", 00:17:05.257 "dhgroup": "ffdhe2048" 00:17:05.257 } 00:17:05.257 } 00:17:05.257 ]' 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.257 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.257 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.257 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.257 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.518 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:05.518 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:06.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.088 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.349 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.609 
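Each pass of the loop traced above drives the same RPC round-trip, just with a different digest/dhgroup/key combination. A minimal sketch of one iteration, distilled from the commands visible in the log (the rpc.py path, socket, NQNs, and bdev name are copied from the trace; that the target-side calls go to rpc.py's default socket is an assumption, since the trace only shows the host-side socket — treat this as an illustration, not the authoritative target/auth.sh):

    #!/usr/bin/env bash
    # One connect_authenticate iteration as it appears in the trace.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock   # host-side SPDK app, per the hostrpc calls above
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    digest=sha384 dhgroup=ffdhe3072 keyid=0

    # 1. Restrict the host-side initiator to a single digest/dhgroup pair.
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host on the target, bound to a DH-HMAC-CHAP key
    #    (target-side call; default rpc.py socket assumed here).
    "$RPC_PY" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Attach a controller so the authentication handshake actually runs.
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 4. Verify the controller and qpair state, then tear down.
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_get_controllers
    "$RPC_PY" nvmf_subsystem_get_qpairs "$SUBNQN"
    "$RPC_PY" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    "$RPC_PY" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Between steps 4's detach and remove_host, the trace additionally exercises the kernel initiator with `nvme connect ... --dhchap-secret DHHC-1:...` followed by `nvme disconnect`, so each key is validated by both the SPDK bdev path and the kernel host path.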
00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.609 { 00:17:06.609 "cntlid": 65, 00:17:06.609 "qid": 0, 00:17:06.609 "state": "enabled", 00:17:06.609 "thread": "nvmf_tgt_poll_group_000", 00:17:06.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.609 "listen_address": { 00:17:06.609 "trtype": "TCP", 00:17:06.609 "adrfam": "IPv4", 00:17:06.609 "traddr": "10.0.0.2", 00:17:06.609 "trsvcid": "4420" 00:17:06.609 }, 00:17:06.609 "peer_address": { 00:17:06.609 "trtype": "TCP", 00:17:06.609 "adrfam": "IPv4", 00:17:06.609 "traddr": "10.0.0.1", 00:17:06.609 "trsvcid": "45586" 00:17:06.609 }, 00:17:06.609 "auth": { 00:17:06.609 "state": "completed", 00:17:06.609 "digest": "sha384", 00:17:06.609 "dhgroup": "ffdhe3072" 00:17:06.609 } 00:17:06.609 } 00:17:06.609 ]' 00:17:06.609 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.869 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.129 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:07.129 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.698 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.958 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.958 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.218 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.218 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.218 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.218 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.218 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.218 { 00:17:08.218 "cntlid": 67, 00:17:08.218 "qid": 0, 00:17:08.218 "state": "enabled", 00:17:08.218 "thread": "nvmf_tgt_poll_group_000", 00:17:08.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.218 "listen_address": { 00:17:08.218 "trtype": "TCP", 00:17:08.218 "adrfam": "IPv4", 00:17:08.218 "traddr": "10.0.0.2", 00:17:08.218 "trsvcid": "4420" 00:17:08.218 }, 00:17:08.218 "peer_address": { 00:17:08.218 "trtype": "TCP", 00:17:08.218 "adrfam": "IPv4", 00:17:08.218 "traddr": "10.0.0.1", 00:17:08.218 "trsvcid": "45598" 00:17:08.218 }, 00:17:08.218 "auth": { 00:17:08.218 "state": "completed", 00:17:08.218 "digest": "sha384", 00:17:08.218 "dhgroup": "ffdhe3072" 00:17:08.218 } 00:17:08.218 } 00:17:08.218 ]' 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.218 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.479 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.479 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.479 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.479 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret 
DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:08.479 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:09.048 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.309 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.570 00:17:09.570 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.570 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.570 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.830 { 00:17:09.830 "cntlid": 69, 00:17:09.830 "qid": 0, 00:17:09.830 "state": "enabled", 00:17:09.830 "thread": "nvmf_tgt_poll_group_000", 00:17:09.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.830 "listen_address": { 00:17:09.830 "trtype": "TCP", 00:17:09.830 "adrfam": "IPv4", 00:17:09.830 "traddr": "10.0.0.2", 00:17:09.830 "trsvcid": "4420" 00:17:09.830 }, 00:17:09.830 "peer_address": { 00:17:09.830 "trtype": "TCP", 00:17:09.830 "adrfam": "IPv4", 00:17:09.830 "traddr": "10.0.0.1", 00:17:09.830 "trsvcid": "45626" 00:17:09.830 }, 00:17:09.830 "auth": { 00:17:09.830 "state": "completed", 00:17:09.830 "digest": "sha384", 00:17:09.830 "dhgroup": "ffdhe3072" 00:17:09.830 } 00:17:09.830 } 00:17:09.830 ]' 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.830 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:10.091 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:10.091 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.663 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
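Note how key3 is registered above without a --dhchap-ctrlr-key while key0..key2 each carry one: the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion visible in the trace emits the controller-key arguments only when a ctrl secret exists at that index. A standalone illustration of that bash idiom (the array contents here are hypothetical, shortened stand-ins for real DHHC-1 secrets):

    #!/usr/bin/env bash
    # ${var:+word} expands to "word" only if var is set and non-empty,
    # so an empty ckeys[3] drops the --dhchap-ctrlr-key arguments entirely.
    ckeys=("DHHC-1:03:aaa=:" "DHHC-1:02:bbb=:" "DHHC-1:01:ccc=:" "")  # hypothetical secrets

    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "keyid=$keyid -> args: --dhchap-key key$keyid ${ckey[*]}"
    done
    # keyid=3 prints no --dhchap-ctrlr-key, matching the key3 add_host above.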
00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.925 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.187 00:17:11.187 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.187 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.187 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.448 { 00:17:11.448 "cntlid": 71, 00:17:11.448 "qid": 0, 00:17:11.448 "state": "enabled", 00:17:11.448 "thread": "nvmf_tgt_poll_group_000", 00:17:11.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.448 "listen_address": { 00:17:11.448 "trtype": "TCP", 00:17:11.448 "adrfam": "IPv4", 00:17:11.448 "traddr": "10.0.0.2", 00:17:11.448 "trsvcid": "4420" 00:17:11.448 }, 00:17:11.448 "peer_address": { 00:17:11.448 "trtype": "TCP", 00:17:11.448 "adrfam": "IPv4", 00:17:11.448 "traddr": "10.0.0.1", 00:17:11.448 "trsvcid": "45656" 00:17:11.448 }, 00:17:11.448 "auth": { 00:17:11.448 "state": "completed", 00:17:11.448 "digest": "sha384", 00:17:11.448 "dhgroup": "ffdhe3072" 00:17:11.448 } 00:17:11.448 } 00:17:11.448 ]' 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.448 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.708 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:11.708 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:12.277 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.278 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
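The jq probes repeated after every attach are the real assertions of this test: they pull the negotiated parameters out of the nvmf_subsystem_get_qpairs dump and compare them against the loop variables. A minimal standalone version of those checks, run against an inline JSON sample shaped like the qpair dumps above (trimmed to the fields the test actually reads):

    #!/usr/bin/env bash
    # Same three checks as the trace: digest, dhgroup, and auth state of qpair 0.
    qpairs='[{"cntlid": 73, "qid": 0, "state": "enabled",
              "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe4096"}}]'

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] || exit 1
    echo "DH-HMAC-CHAP negotiation verified"

An auth state of "completed" only appears once the full DH-HMAC-CHAP exchange has succeeded for that qpair, which is why the test gates every iteration on it before moving to the next dhgroup/key pair.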
00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.538 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.539 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.799 00:17:12.799 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.799 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.799 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.060 { 00:17:13.060 "cntlid": 73, 00:17:13.060 "qid": 0, 00:17:13.060 "state": "enabled", 00:17:13.060 "thread": "nvmf_tgt_poll_group_000", 00:17:13.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.060 "listen_address": { 00:17:13.060 "trtype": "TCP", 00:17:13.060 "adrfam": "IPv4", 00:17:13.060 "traddr": "10.0.0.2", 00:17:13.060 "trsvcid": "4420" 00:17:13.060 }, 00:17:13.060 "peer_address": { 00:17:13.060 "trtype": "TCP", 00:17:13.060 "adrfam": "IPv4", 00:17:13.060 "traddr": "10.0.0.1", 00:17:13.060 "trsvcid": "45684" 00:17:13.060 }, 00:17:13.060 "auth": { 00:17:13.060 "state": "completed", 00:17:13.060 "digest": "sha384", 00:17:13.060 "dhgroup": "ffdhe4096" 00:17:13.060 } 00:17:13.060 } 00:17:13.060 ]' 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.060 
11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.060 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.321 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:13.321 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.891 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.152 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.412 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.413 { 00:17:14.413 "cntlid": 75, 00:17:14.413 "qid": 0, 00:17:14.413 "state": "enabled", 00:17:14.413 "thread": "nvmf_tgt_poll_group_000", 00:17:14.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.413 "listen_address": { 00:17:14.413 "trtype": "TCP", 00:17:14.413 "adrfam": "IPv4", 00:17:14.413 "traddr": "10.0.0.2", 00:17:14.413 "trsvcid": "4420" 00:17:14.413 }, 00:17:14.413 "peer_address": { 00:17:14.413 "trtype": "TCP", 00:17:14.413 "adrfam": "IPv4", 00:17:14.413 "traddr": "10.0.0.1", 00:17:14.413 "trsvcid": "45710" 00:17:14.413 }, 00:17:14.413 "auth": { 00:17:14.413 "state": "completed", 00:17:14.413 "digest": "sha384", 00:17:14.413 "dhgroup": "ffdhe4096" 00:17:14.413 } 00:17:14.413 } 00:17:14.413 ]' 00:17:14.413 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.674 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.934 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:14.934 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.505 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.765 00:17:15.765 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.765 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.765 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.025 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.026 { 00:17:16.026 "cntlid": 77, 00:17:16.026 "qid": 0, 00:17:16.026 "state": "enabled", 00:17:16.026 "thread": "nvmf_tgt_poll_group_000", 00:17:16.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.026 "listen_address": { 00:17:16.026 "trtype": "TCP", 00:17:16.026 "adrfam": "IPv4", 00:17:16.026 "traddr": "10.0.0.2", 00:17:16.026 "trsvcid": "4420" 00:17:16.026 }, 00:17:16.026 "peer_address": { 00:17:16.026 "trtype": "TCP", 00:17:16.026 "adrfam": "IPv4", 00:17:16.026 "traddr": "10.0.0.1", 00:17:16.026 "trsvcid": "35474" 00:17:16.026 }, 00:17:16.026 "auth": { 00:17:16.026 "state": "completed", 00:17:16.026 "digest": "sha384", 00:17:16.026 "dhgroup": "ffdhe4096" 00:17:16.026 } 00:17:16.026 } 00:17:16.026 ]' 00:17:16.026 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.026 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.026 11:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.286 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.286 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.286 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.286 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.286 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.286 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:16.286 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.226 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.226 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.226 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.484 00:17:17.484 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.484 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.484 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.743 { 00:17:17.743 "cntlid": 79, 00:17:17.743 "qid": 0, 00:17:17.743 "state": "enabled", 00:17:17.743 "thread": "nvmf_tgt_poll_group_000", 00:17:17.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.743 "listen_address": { 00:17:17.743 "trtype": "TCP", 00:17:17.743 "adrfam": "IPv4", 00:17:17.743 "traddr": "10.0.0.2", 00:17:17.743 "trsvcid": "4420" 00:17:17.743 }, 00:17:17.743 "peer_address": { 00:17:17.743 "trtype": "TCP", 00:17:17.743 "adrfam": "IPv4", 00:17:17.743 "traddr": "10.0.0.1", 00:17:17.743 "trsvcid": "35510" 00:17:17.743 }, 00:17:17.743 "auth": { 00:17:17.743 "state": "completed", 00:17:17.743 "digest": "sha384", 00:17:17.743 "dhgroup": "ffdhe4096" 00:17:17.743 } 00:17:17.743 } 00:17:17.743 ]' 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.743 11:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.743 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.002 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:18.002 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.569 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.828 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:18.828 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.828 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.829 11:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.829 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.088 00:17:19.347 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.347 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.347 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.347 { 00:17:19.347 "cntlid": 81, 00:17:19.347 "qid": 0, 00:17:19.347 "state": "enabled", 00:17:19.347 "thread": "nvmf_tgt_poll_group_000", 00:17:19.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.347 "listen_address": { 00:17:19.347 "trtype": "TCP", 00:17:19.347 "adrfam": "IPv4", 00:17:19.347 "traddr": "10.0.0.2", 00:17:19.347 "trsvcid": "4420" 00:17:19.347 }, 00:17:19.347 "peer_address": { 00:17:19.347 "trtype": "TCP", 00:17:19.347 "adrfam": "IPv4", 00:17:19.347 "traddr": "10.0.0.1", 00:17:19.347 "trsvcid": "35524" 00:17:19.347 }, 00:17:19.347 "auth": { 00:17:19.347 "state": "completed", 00:17:19.347 "digest": 
"sha384", 00:17:19.347 "dhgroup": "ffdhe6144" 00:17:19.347 } 00:17:19.347 } 00:17:19.347 ]' 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.347 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:19.607 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.544 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.803 00:17:20.803 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.803 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.803 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.062 { 00:17:21.062 "cntlid": 83, 00:17:21.062 "qid": 0, 00:17:21.062 "state": "enabled", 00:17:21.062 "thread": "nvmf_tgt_poll_group_000", 00:17:21.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.062 "listen_address": { 00:17:21.062 "trtype": "TCP", 00:17:21.062 "adrfam": "IPv4", 00:17:21.062 "traddr": "10.0.0.2", 00:17:21.062 
"trsvcid": "4420" 00:17:21.062 }, 00:17:21.062 "peer_address": { 00:17:21.062 "trtype": "TCP", 00:17:21.062 "adrfam": "IPv4", 00:17:21.062 "traddr": "10.0.0.1", 00:17:21.062 "trsvcid": "35556" 00:17:21.062 }, 00:17:21.062 "auth": { 00:17:21.062 "state": "completed", 00:17:21.062 "digest": "sha384", 00:17:21.062 "dhgroup": "ffdhe6144" 00:17:21.062 } 00:17:21.062 } 00:17:21.062 ]' 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.062 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.321 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.321 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.321 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.321 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.321 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.321 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:21.321 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.258 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.258 
11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.258 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.518 00:17:22.518 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.518 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.518 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.777 { 00:17:22.777 "cntlid": 85, 00:17:22.777 "qid": 0, 00:17:22.777 "state": "enabled", 00:17:22.777 "thread": "nvmf_tgt_poll_group_000", 00:17:22.777 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.777 "listen_address": { 00:17:22.777 "trtype": "TCP", 00:17:22.777 "adrfam": "IPv4", 00:17:22.777 "traddr": "10.0.0.2", 00:17:22.777 "trsvcid": "4420" 00:17:22.777 }, 00:17:22.777 "peer_address": { 00:17:22.777 "trtype": "TCP", 00:17:22.777 "adrfam": "IPv4", 00:17:22.777 "traddr": "10.0.0.1", 00:17:22.777 "trsvcid": "35590" 00:17:22.777 }, 00:17:22.777 "auth": { 00:17:22.777 "state": "completed", 00:17:22.777 "digest": "sha384", 00:17:22.777 "dhgroup": "ffdhe6144" 00:17:22.777 } 00:17:22.777 } 00:17:22.777 ]' 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.777 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.037 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.037 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.037 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.037 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:23.037 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.976 11:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.976 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.235 00:17:24.235 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.235 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.235 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.496 { 00:17:24.496 "cntlid": 87, 
00:17:24.496 "qid": 0, 00:17:24.496 "state": "enabled", 00:17:24.496 "thread": "nvmf_tgt_poll_group_000", 00:17:24.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.496 "listen_address": { 00:17:24.496 "trtype": "TCP", 00:17:24.496 "adrfam": "IPv4", 00:17:24.496 "traddr": "10.0.0.2", 00:17:24.496 "trsvcid": "4420" 00:17:24.496 }, 00:17:24.496 "peer_address": { 00:17:24.496 "trtype": "TCP", 00:17:24.496 "adrfam": "IPv4", 00:17:24.496 "traddr": "10.0.0.1", 00:17:24.496 "trsvcid": "35608" 00:17:24.496 }, 00:17:24.496 "auth": { 00:17:24.496 "state": "completed", 00:17:24.496 "digest": "sha384", 00:17:24.496 "dhgroup": "ffdhe6144" 00:17:24.496 } 00:17:24.496 } 00:17:24.496 ]' 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.496 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.756 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.756 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.756 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.756 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:24.756 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:25.327 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.587 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.158 00:17:26.158 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.158 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.158 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.419 { 00:17:26.419 "cntlid": 89, 00:17:26.419 "qid": 0, 00:17:26.419 "state": "enabled", 00:17:26.419 "thread": "nvmf_tgt_poll_group_000", 00:17:26.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.419 "listen_address": { 00:17:26.419 "trtype": "TCP", 00:17:26.419 "adrfam": "IPv4", 00:17:26.419 "traddr": "10.0.0.2", 00:17:26.419 "trsvcid": "4420" 00:17:26.419 }, 00:17:26.419 "peer_address": { 00:17:26.419 "trtype": "TCP", 00:17:26.419 "adrfam": "IPv4", 00:17:26.419 "traddr": "10.0.0.1", 00:17:26.419 "trsvcid": "47270" 00:17:26.419 }, 00:17:26.419 "auth": { 00:17:26.419 "state": "completed", 00:17:26.419 "digest": "sha384", 00:17:26.419 "dhgroup": "ffdhe8192" 00:17:26.419 } 00:17:26.419 } 00:17:26.419 ]' 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.419 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.680 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:26.681 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.253 11:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.253 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.513 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.083 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.083 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.343 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.343 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.343 { 00:17:28.343 "cntlid": 91, 00:17:28.343 "qid": 0, 00:17:28.343 "state": "enabled", 00:17:28.343 "thread": "nvmf_tgt_poll_group_000", 00:17:28.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.343 "listen_address": { 00:17:28.343 "trtype": "TCP", 00:17:28.343 "adrfam": "IPv4", 00:17:28.343 "traddr": "10.0.0.2", 00:17:28.343 "trsvcid": "4420" 00:17:28.343 }, 00:17:28.343 "peer_address": { 00:17:28.343 "trtype": "TCP", 00:17:28.343 "adrfam": "IPv4", 00:17:28.343 "traddr": "10.0.0.1", 00:17:28.343 "trsvcid": "47310" 00:17:28.343 }, 00:17:28.343 "auth": { 00:17:28.343 "state": "completed", 00:17:28.343 "digest": "sha384", 00:17:28.343 "dhgroup": "ffdhe8192" 00:17:28.343 } 00:17:28.343 } 00:17:28.343 ]' 00:17:28.343 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.343 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.603 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:28.603 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.173 11:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.173 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.433 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.434 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.693 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.954 11:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.954 { 00:17:29.954 "cntlid": 93, 00:17:29.954 "qid": 0, 00:17:29.954 "state": "enabled", 00:17:29.954 "thread": "nvmf_tgt_poll_group_000", 00:17:29.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.954 "listen_address": { 00:17:29.954 "trtype": "TCP", 00:17:29.954 "adrfam": "IPv4", 00:17:29.954 "traddr": "10.0.0.2", 00:17:29.954 "trsvcid": "4420" 00:17:29.954 }, 00:17:29.954 "peer_address": { 00:17:29.954 "trtype": "TCP", 00:17:29.954 "adrfam": "IPv4", 00:17:29.954 "traddr": "10.0.0.1", 00:17:29.954 "trsvcid": "47328" 00:17:29.954 }, 00:17:29.954 "auth": { 00:17:29.954 "state": "completed", 00:17:29.954 "digest": "sha384", 00:17:29.954 "dhgroup": "ffdhe8192" 00:17:29.954 } 00:17:29.954 } 00:17:29.954 ]' 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.954 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.214 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.214 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.214 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.214 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.214 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.214 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:30.214 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:31.154 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.154 11:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.154 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.154 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.155 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.725 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.725 { 00:17:31.725 "cntlid": 95, 00:17:31.725 "qid": 0, 00:17:31.725 "state": "enabled", 00:17:31.725 "thread": "nvmf_tgt_poll_group_000", 00:17:31.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.725 "listen_address": { 00:17:31.725 "trtype": "TCP", 00:17:31.725 "adrfam": "IPv4", 00:17:31.725 "traddr": "10.0.0.2", 00:17:31.725 "trsvcid": "4420" 00:17:31.725 }, 00:17:31.725 "peer_address": { 00:17:31.725 "trtype": "TCP", 00:17:31.725 "adrfam": "IPv4", 00:17:31.725 "traddr": "10.0.0.1", 00:17:31.725 "trsvcid": "47350" 00:17:31.725 }, 00:17:31.725 "auth": { 00:17:31.725 "state": "completed", 00:17:31.725 "digest": "sha384", 00:17:31.725 "dhgroup": "ffdhe8192" 00:17:31.725 } 00:17:31.725 } 00:17:31.725 ]' 00:17:31.725 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.985 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.986 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.246 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:32.246 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.817 11:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.817 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.077 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.077 00:17:33.337 
11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.337 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.337 11:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.337 { 00:17:33.337 "cntlid": 97, 00:17:33.337 "qid": 0, 00:17:33.337 "state": "enabled", 00:17:33.337 "thread": "nvmf_tgt_poll_group_000", 00:17:33.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.337 "listen_address": { 00:17:33.337 "trtype": "TCP", 00:17:33.337 "adrfam": "IPv4", 00:17:33.337 "traddr": "10.0.0.2", 00:17:33.337 "trsvcid": "4420" 00:17:33.337 }, 00:17:33.337 "peer_address": { 00:17:33.337 "trtype": "TCP", 00:17:33.337 "adrfam": "IPv4", 00:17:33.337 "traddr": "10.0.0.1", 00:17:33.337 "trsvcid": "47388" 00:17:33.337 }, 00:17:33.337 "auth": { 00:17:33.337 "state": "completed", 00:17:33.337 "digest": "sha512", 00:17:33.337 "dhgroup": "null" 00:17:33.337 } 00:17:33.337 } 00:17:33.337 ]' 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.337 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:33.597 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.536 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.797 00:17:34.797 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.797 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.797 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.057 { 00:17:35.057 "cntlid": 99, 00:17:35.057 "qid": 0, 00:17:35.057 "state": "enabled", 00:17:35.057 "thread": "nvmf_tgt_poll_group_000", 00:17:35.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.057 "listen_address": { 00:17:35.057 "trtype": "TCP", 00:17:35.057 "adrfam": "IPv4", 00:17:35.057 "traddr": "10.0.0.2", 00:17:35.057 "trsvcid": "4420" 00:17:35.057 }, 00:17:35.057 "peer_address": { 00:17:35.057 "trtype": "TCP", 00:17:35.057 "adrfam": "IPv4", 00:17:35.057 "traddr": "10.0.0.1", 00:17:35.057 "trsvcid": "47416" 00:17:35.057 }, 00:17:35.057 "auth": { 00:17:35.057 "state": "completed", 00:17:35.057 "digest": "sha512", 00:17:35.057 "dhgroup": "null" 00:17:35.057 } 00:17:35.057 } 00:17:35.057 ]' 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.057 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.317 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:35.317 11:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.887 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:36.148 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.408 00:17:36.408 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.408 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.408 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.668 { 00:17:36.668 "cntlid": 101, 00:17:36.668 "qid": 0, 00:17:36.668 "state": "enabled", 00:17:36.668 "thread": "nvmf_tgt_poll_group_000", 00:17:36.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.668 "listen_address": { 00:17:36.668 "trtype": "TCP", 00:17:36.668 "adrfam": "IPv4", 00:17:36.668 "traddr": "10.0.0.2", 00:17:36.668 "trsvcid": "4420" 00:17:36.668 }, 00:17:36.668 "peer_address": { 00:17:36.668 "trtype": "TCP", 00:17:36.668 "adrfam": "IPv4", 00:17:36.668 "traddr": "10.0.0.1", 00:17:36.668 "trsvcid": "35992" 00:17:36.668 }, 00:17:36.668 "auth": { 00:17:36.668 "state": "completed", 00:17:36.668 "digest": "sha512", 00:17:36.668 "dhgroup": "null" 00:17:36.668 } 00:17:36.668 } 00:17:36.668 ]' 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.668 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.928 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:36.928 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.498 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.758 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.019 00:17:38.019 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.019 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.019 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.279 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.279 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.280 { 00:17:38.280 "cntlid": 103, 00:17:38.280 "qid": 0, 00:17:38.280 "state": "enabled", 00:17:38.280 "thread": "nvmf_tgt_poll_group_000", 00:17:38.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.280 "listen_address": { 00:17:38.280 "trtype": "TCP", 00:17:38.280 "adrfam": "IPv4", 00:17:38.280 "traddr": "10.0.0.2", 00:17:38.280 "trsvcid": "4420" 00:17:38.280 }, 00:17:38.280 "peer_address": { 00:17:38.280 "trtype": "TCP", 00:17:38.280 "adrfam": "IPv4", 00:17:38.280 "traddr": "10.0.0.1", 00:17:38.280 "trsvcid": "36026" 00:17:38.280 }, 00:17:38.280 "auth": { 00:17:38.280 "state": "completed", 00:17:38.280 "digest": "sha512", 00:17:38.280 "dhgroup": "null" 00:17:38.280 } 00:17:38.280 } 00:17:38.280 ]' 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.280 11:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.280 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:38.280 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.280 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.280 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.280 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.540 11:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:38.540 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.110 11:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.371 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.631 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.631 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.892 { 00:17:39.892 "cntlid": 105, 00:17:39.892 "qid": 0, 00:17:39.892 "state": "enabled", 00:17:39.892 "thread": "nvmf_tgt_poll_group_000", 00:17:39.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.892 "listen_address": { 00:17:39.892 "trtype": "TCP", 00:17:39.892 "adrfam": "IPv4", 00:17:39.892 "traddr": "10.0.0.2", 00:17:39.892 "trsvcid": "4420" 00:17:39.892 }, 00:17:39.892 "peer_address": { 00:17:39.892 "trtype": "TCP", 00:17:39.892 "adrfam": "IPv4", 00:17:39.892 "traddr": "10.0.0.1", 00:17:39.892 "trsvcid": "36058" 00:17:39.892 }, 00:17:39.892 "auth": { 00:17:39.892 "state": "completed", 00:17:39.892 "digest": "sha512", 00:17:39.892 "dhgroup": "ffdhe2048" 00:17:39.892 } 00:17:39.892 } 00:17:39.892 ]' 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.892 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.892 11:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.152 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:40.152 11:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.723 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.983 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.243 00:17:41.243 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.243 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.243 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.243 { 00:17:41.243 "cntlid": 107, 00:17:41.243 "qid": 0, 00:17:41.243 "state": "enabled", 00:17:41.243 "thread": "nvmf_tgt_poll_group_000", 00:17:41.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.243 "listen_address": { 00:17:41.243 "trtype": "TCP", 00:17:41.243 "adrfam": "IPv4", 00:17:41.243 "traddr": "10.0.0.2", 00:17:41.243 "trsvcid": "4420" 00:17:41.243 }, 00:17:41.243 "peer_address": { 00:17:41.243 "trtype": "TCP", 00:17:41.243 "adrfam": "IPv4", 00:17:41.243 "traddr": "10.0.0.1", 00:17:41.243 "trsvcid": "36090" 00:17:41.243 }, 00:17:41.243 "auth": { 00:17:41.243 "state": "completed", 00:17:41.243 "digest": "sha512", 00:17:41.243 "dhgroup": "ffdhe2048" 00:17:41.243 } 00:17:41.243 } 00:17:41.243 ]' 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.243 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:41.503 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
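After each attach, the test reads the negotiated parameters back from the target and asserts them with jq: nvmf_subsystem_get_qpairs reports one entry per queue pair, and its auth object must carry the expected digest and dhgroup with state "completed" before the controller is detached again. The same check in miniature, assuming a single qpair as in this run:

    # query the target for active qpairs and verify the handshake result
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]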
00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.445 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.705 00:17:42.705 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.705 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.705 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.964 { 00:17:42.964 "cntlid": 109, 00:17:42.964 "qid": 0, 00:17:42.964 "state": "enabled", 00:17:42.964 "thread": "nvmf_tgt_poll_group_000", 00:17:42.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.964 "listen_address": { 00:17:42.964 "trtype": "TCP", 00:17:42.964 "adrfam": "IPv4", 00:17:42.964 "traddr": "10.0.0.2", 00:17:42.964 "trsvcid": "4420" 00:17:42.964 }, 00:17:42.964 "peer_address": { 00:17:42.964 "trtype": "TCP", 00:17:42.964 "adrfam": "IPv4", 00:17:42.964 "traddr": "10.0.0.1", 00:17:42.964 "trsvcid": "36124" 00:17:42.964 }, 00:17:42.964 "auth": { 00:17:42.964 "state": "completed", 00:17:42.964 "digest": "sha512", 00:17:42.964 "dhgroup": "ffdhe2048" 00:17:42.964 } 00:17:42.964 } 00:17:42.964 ]' 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.964 11:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.964 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.965 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.225 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:43.225 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.795 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.055 11:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.055 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.315 00:17:44.315 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.315 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.315 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.576 { 00:17:44.576 "cntlid": 111, 00:17:44.576 "qid": 0, 00:17:44.576 "state": "enabled", 00:17:44.576 "thread": "nvmf_tgt_poll_group_000", 00:17:44.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.576 "listen_address": { 00:17:44.576 "trtype": "TCP", 00:17:44.576 "adrfam": "IPv4", 00:17:44.576 "traddr": "10.0.0.2", 00:17:44.576 "trsvcid": "4420" 00:17:44.576 }, 00:17:44.576 "peer_address": { 00:17:44.576 "trtype": "TCP", 00:17:44.576 "adrfam": "IPv4", 00:17:44.576 "traddr": "10.0.0.1", 00:17:44.576 "trsvcid": "36152" 00:17:44.576 }, 00:17:44.576 "auth": { 00:17:44.576 "state": "completed", 00:17:44.576 "digest": "sha512", 00:17:44.576 "dhgroup": "ffdhe2048" 00:17:44.576 } 00:17:44.576 } 00:17:44.576 ]' 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.576 
11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.576 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.836 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:44.836 11:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.406 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.666 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.927 00:17:45.927 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.927 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.927 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.927 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.187 { 00:17:46.187 "cntlid": 113, 00:17:46.187 "qid": 0, 00:17:46.187 "state": "enabled", 00:17:46.187 "thread": "nvmf_tgt_poll_group_000", 00:17:46.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.187 "listen_address": { 00:17:46.187 "trtype": "TCP", 00:17:46.187 "adrfam": "IPv4", 00:17:46.187 "traddr": "10.0.0.2", 00:17:46.187 "trsvcid": "4420" 00:17:46.187 }, 00:17:46.187 "peer_address": { 00:17:46.187 "trtype": "TCP", 00:17:46.187 "adrfam": "IPv4", 00:17:46.187 "traddr": "10.0.0.1", 00:17:46.187 "trsvcid": "45332" 00:17:46.187 }, 00:17:46.187 "auth": { 00:17:46.187 "state": "completed", 00:17:46.187 "digest": "sha512", 00:17:46.187 "dhgroup": "ffdhe3072" 00:17:46.187 } 00:17:46.187 } 00:17:46.187 ]' 00:17:46.187 11:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.187 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.447 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:46.447 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.152 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.464 00:17:47.464 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.464 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.464 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.744 { 00:17:47.744 "cntlid": 115, 00:17:47.744 "qid": 0, 00:17:47.744 "state": "enabled", 00:17:47.744 "thread": "nvmf_tgt_poll_group_000", 00:17:47.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.744 "listen_address": { 00:17:47.744 "trtype": "TCP", 00:17:47.744 "adrfam": "IPv4", 00:17:47.744 "traddr": "10.0.0.2", 00:17:47.744 "trsvcid": "4420" 00:17:47.744 }, 00:17:47.744 "peer_address": { 00:17:47.744 "trtype": "TCP", 00:17:47.744 "adrfam": "IPv4", 
00:17:47.744 "traddr": "10.0.0.1", 00:17:47.744 "trsvcid": "45370" 00:17:47.744 }, 00:17:47.744 "auth": { 00:17:47.744 "state": "completed", 00:17:47.744 "digest": "sha512", 00:17:47.744 "dhgroup": "ffdhe3072" 00:17:47.744 } 00:17:47.744 } 00:17:47.744 ]' 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.744 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.004 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:48.004 11:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.574 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
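Each round then repeats the handshake through the kernel host with nvme connect, passing secrets in the DHHC-1 wire format, DHHC-1:<t>:<base64 key material>:, where the <t> field records how the secret was transformed (00 for an untransformed secret; 01/02/03 for a SHA-256/384/512 transform). --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key for bidirectional authentication; note the key-index-3 rounds pass only --dhchap-secret, so that index exercises unidirectional authentication. As an aside, not part of this trace, recent nvme-cli can mint such secrets directly — a sketch, assuming nvme-cli 2.x flag names:

    # generate a SHA-512-transformed DH-HMAC-CHAP secret for this run's host NQN
    # (illustrative only; this command does not appear in the trace)
    nvme gen-dhchap-key --hmac=3 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be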
00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.834 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.835 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.835 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.095 00:17:49.095 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.095 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.095 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.095 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.355 { 00:17:49.355 "cntlid": 117, 00:17:49.355 "qid": 0, 00:17:49.355 "state": "enabled", 00:17:49.355 "thread": "nvmf_tgt_poll_group_000", 00:17:49.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.355 "listen_address": { 00:17:49.355 "trtype": "TCP", 
00:17:49.355 "adrfam": "IPv4", 00:17:49.355 "traddr": "10.0.0.2", 00:17:49.355 "trsvcid": "4420" 00:17:49.355 }, 00:17:49.355 "peer_address": { 00:17:49.355 "trtype": "TCP", 00:17:49.355 "adrfam": "IPv4", 00:17:49.355 "traddr": "10.0.0.1", 00:17:49.355 "trsvcid": "45388" 00:17:49.355 }, 00:17:49.355 "auth": { 00:17:49.355 "state": "completed", 00:17:49.355 "digest": "sha512", 00:17:49.355 "dhgroup": "ffdhe3072" 00:17:49.355 } 00:17:49.355 } 00:17:49.355 ]' 00:17:49.355 11:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.355 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.622 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:49.622 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.190 11:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.450 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.710 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.710 { 00:17:50.710 "cntlid": 119, 00:17:50.710 "qid": 0, 00:17:50.710 "state": "enabled", 00:17:50.710 "thread": "nvmf_tgt_poll_group_000", 00:17:50.710 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.710 "listen_address": { 00:17:50.710 "trtype": "TCP", 00:17:50.710 "adrfam": "IPv4", 00:17:50.710 "traddr": "10.0.0.2", 00:17:50.710 "trsvcid": "4420" 00:17:50.710 }, 00:17:50.710 "peer_address": { 00:17:50.710 "trtype": "TCP", 00:17:50.710 "adrfam": "IPv4", 00:17:50.710 "traddr": "10.0.0.1", 00:17:50.710 "trsvcid": "45406" 00:17:50.710 }, 00:17:50.710 "auth": { 00:17:50.710 "state": "completed", 00:17:50.710 "digest": "sha512", 00:17:50.710 "dhgroup": "ffdhe3072" 00:17:50.710 } 00:17:50.710 } 00:17:50.710 ]' 00:17:50.710 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.969 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.969 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.969 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.969 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.970 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.970 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.970 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.229 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:51.229 11:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.799 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.799 11:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.059 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.318 00:17:52.318 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.318 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.318 11:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.318 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.318 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.318 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.318 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.318 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.318 11:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.318 { 00:17:52.318 "cntlid": 121, 00:17:52.318 "qid": 0, 00:17:52.318 "state": "enabled", 00:17:52.318 "thread": "nvmf_tgt_poll_group_000", 00:17:52.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.318 "listen_address": { 00:17:52.318 "trtype": "TCP", 00:17:52.318 "adrfam": "IPv4", 00:17:52.318 "traddr": "10.0.0.2", 00:17:52.318 "trsvcid": "4420" 00:17:52.318 }, 00:17:52.318 "peer_address": { 00:17:52.318 "trtype": "TCP", 00:17:52.318 "adrfam": "IPv4", 00:17:52.318 "traddr": "10.0.0.1", 00:17:52.318 "trsvcid": "45434" 00:17:52.318 }, 00:17:52.318 "auth": { 00:17:52.318 "state": "completed", 00:17:52.319 "digest": "sha512", 00:17:52.319 "dhgroup": "ffdhe4096" 00:17:52.319 } 00:17:52.319 } 00:17:52.319 ]' 00:17:52.319 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.578 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.839 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:52.840 11:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
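The auth.sh line markers in the trace (@119/@120/@121/@123) expose the loop driving all of this: an outer walk over DH groups — ffdhe2048 above, then ffdhe3072, with the ffdhe4096 pass underway here — and an inner walk over key indices 0 through 3, re-pinning the host options and re-running connect_authenticate each time. The reconstructed shape, with the array contents inferred from the trace rather than quoted from auth.sh:

    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119: ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do         # auth.sh@120: key indices 0..3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"    # auth.sh@121
            connect_authenticate sha512 "$dhgroup" "$keyid"             # auth.sh@123
        done
    done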
00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.412 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.672 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.933 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.933 { 00:17:53.933 "cntlid": 123, 00:17:53.933 "qid": 0, 00:17:53.933 "state": "enabled", 00:17:53.933 "thread": "nvmf_tgt_poll_group_000", 00:17:53.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.933 "listen_address": { 00:17:53.933 "trtype": "TCP", 00:17:53.933 "adrfam": "IPv4", 00:17:53.933 "traddr": "10.0.0.2", 00:17:53.933 "trsvcid": "4420" 00:17:53.933 }, 00:17:53.933 "peer_address": { 00:17:53.933 "trtype": "TCP", 00:17:53.933 "adrfam": "IPv4", 00:17:53.933 "traddr": "10.0.0.1", 00:17:53.933 "trsvcid": "45456" 00:17:53.933 }, 00:17:53.933 "auth": { 00:17:53.933 "state": "completed", 00:17:53.933 "digest": "sha512", 00:17:53.933 "dhgroup": "ffdhe4096" 00:17:53.933 } 00:17:53.933 } 00:17:53.933 ]' 00:17:53.933 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.193 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.453 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:54.453 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.023 11:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.023 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.284 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.545 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.545 11:53:03 
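(The assertions traced in each cycle check the negotiated authentication parameters on the target side against the loop's current digest/dhgroup. A minimal sketch of that verification, assuming the same RPC and SUBNQN variables as in the sketch above; the jq filters are copied from the trace.)

    # Confirm the host-side controller came up under the expected name.
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Inspect the subsystem's qpairs and assert the DH-HMAC-CHAP outcome.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]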
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.545 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.806 { 00:17:55.806 "cntlid": 125, 00:17:55.806 "qid": 0, 00:17:55.806 "state": "enabled", 00:17:55.806 "thread": "nvmf_tgt_poll_group_000", 00:17:55.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.806 "listen_address": { 00:17:55.806 "trtype": "TCP", 00:17:55.806 "adrfam": "IPv4", 00:17:55.806 "traddr": "10.0.0.2", 00:17:55.806 "trsvcid": "4420" 00:17:55.806 }, 00:17:55.806 "peer_address": { 00:17:55.806 "trtype": "TCP", 00:17:55.806 "adrfam": "IPv4", 00:17:55.806 "traddr": "10.0.0.1", 00:17:55.806 "trsvcid": "45476" 00:17:55.806 }, 00:17:55.806 "auth": { 00:17:55.806 "state": "completed", 00:17:55.806 "digest": "sha512", 00:17:55.806 "dhgroup": "ffdhe4096" 00:17:55.806 } 00:17:55.806 } 00:17:55.806 ]' 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.806 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.067 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:56.067 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.639 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.899 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.159 00:17:57.159 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.159 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.159 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.159 11:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.159 { 00:17:57.159 "cntlid": 127, 00:17:57.159 "qid": 0, 00:17:57.159 "state": "enabled", 00:17:57.159 "thread": "nvmf_tgt_poll_group_000", 00:17:57.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.159 "listen_address": { 00:17:57.159 "trtype": "TCP", 00:17:57.159 "adrfam": "IPv4", 00:17:57.159 "traddr": "10.0.0.2", 00:17:57.159 "trsvcid": "4420" 00:17:57.159 }, 00:17:57.159 "peer_address": { 00:17:57.159 "trtype": "TCP", 00:17:57.159 "adrfam": "IPv4", 00:17:57.159 "traddr": "10.0.0.1", 00:17:57.159 "trsvcid": "38830" 00:17:57.159 }, 00:17:57.159 "auth": { 00:17:57.159 "state": "completed", 00:17:57.159 "digest": "sha512", 00:17:57.159 "dhgroup": "ffdhe4096" 00:17:57.159 } 00:17:57.159 } 00:17:57.159 ]' 00:17:57.159 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.420 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.420 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.420 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.420 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.420 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.421 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.421 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.682 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:57.682 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.252 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.252 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.253 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.513 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.774 
11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.774 { 00:17:58.774 "cntlid": 129, 00:17:58.774 "qid": 0, 00:17:58.774 "state": "enabled", 00:17:58.774 "thread": "nvmf_tgt_poll_group_000", 00:17:58.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.774 "listen_address": { 00:17:58.774 "trtype": "TCP", 00:17:58.774 "adrfam": "IPv4", 00:17:58.774 "traddr": "10.0.0.2", 00:17:58.774 "trsvcid": "4420" 00:17:58.774 }, 00:17:58.774 "peer_address": { 00:17:58.774 "trtype": "TCP", 00:17:58.774 "adrfam": "IPv4", 00:17:58.774 "traddr": "10.0.0.1", 00:17:58.774 "trsvcid": "38860" 00:17:58.774 }, 00:17:58.774 "auth": { 00:17:58.774 "state": "completed", 00:17:58.774 "digest": "sha512", 00:17:58.774 "dhgroup": "ffdhe6144" 00:17:58.774 } 00:17:58.774 } 00:17:58.774 ]' 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.774 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:59.035 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.978 11:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.238 00:18:00.238 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.238 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.238 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.498 { 00:18:00.498 "cntlid": 131, 00:18:00.498 "qid": 0, 00:18:00.498 "state": "enabled", 00:18:00.498 "thread": "nvmf_tgt_poll_group_000", 00:18:00.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.498 "listen_address": { 00:18:00.498 "trtype": "TCP", 00:18:00.498 "adrfam": "IPv4", 00:18:00.498 "traddr": "10.0.0.2", 00:18:00.498 "trsvcid": "4420" 00:18:00.498 }, 00:18:00.498 "peer_address": { 00:18:00.498 "trtype": "TCP", 00:18:00.498 "adrfam": "IPv4", 00:18:00.498 "traddr": "10.0.0.1", 00:18:00.498 "trsvcid": "38890" 00:18:00.498 }, 00:18:00.498 "auth": { 00:18:00.498 "state": "completed", 00:18:00.498 "digest": "sha512", 00:18:00.498 "dhgroup": "ffdhe6144" 00:18:00.498 } 00:18:00.498 } 00:18:00.498 ]' 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.498 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.758 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.758 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.758 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.758 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:18:00.758 11:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.700 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.962 00:18:01.962 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.962 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.962 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.223 { 00:18:02.223 "cntlid": 133, 00:18:02.223 "qid": 0, 00:18:02.223 "state": "enabled", 00:18:02.223 "thread": "nvmf_tgt_poll_group_000", 00:18:02.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.223 "listen_address": { 00:18:02.223 "trtype": "TCP", 00:18:02.223 "adrfam": "IPv4", 00:18:02.223 "traddr": "10.0.0.2", 00:18:02.223 "trsvcid": "4420" 00:18:02.223 }, 00:18:02.223 "peer_address": { 00:18:02.223 "trtype": "TCP", 00:18:02.223 "adrfam": "IPv4", 00:18:02.223 "traddr": "10.0.0.1", 00:18:02.223 "trsvcid": "38920" 00:18:02.223 }, 00:18:02.223 "auth": { 00:18:02.223 "state": "completed", 00:18:02.223 "digest": "sha512", 00:18:02.223 "dhgroup": "ffdhe6144" 00:18:02.223 } 00:18:02.223 } 00:18:02.223 ]' 00:18:02.223 11:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.223 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.484 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret 
DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:18:02.484 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:18:03.055 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.055 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.055 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.055 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.316 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.316 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.316 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.316 11:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:03.316 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.576 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.837 { 00:18:03.837 "cntlid": 135, 00:18:03.837 "qid": 0, 00:18:03.837 "state": "enabled", 00:18:03.837 "thread": "nvmf_tgt_poll_group_000", 00:18:03.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.837 "listen_address": { 00:18:03.837 "trtype": "TCP", 00:18:03.837 "adrfam": "IPv4", 00:18:03.837 "traddr": "10.0.0.2", 00:18:03.837 "trsvcid": "4420" 00:18:03.837 }, 00:18:03.837 "peer_address": { 00:18:03.837 "trtype": "TCP", 00:18:03.837 "adrfam": "IPv4", 00:18:03.837 "traddr": "10.0.0.1", 00:18:03.837 "trsvcid": "38952" 00:18:03.837 }, 00:18:03.837 "auth": { 00:18:03.837 "state": "completed", 00:18:03.837 "digest": "sha512", 00:18:03.837 "dhgroup": "ffdhe6144" 00:18:03.837 } 00:18:03.837 } 00:18:03.837 ]' 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.837 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.098 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:04.365 11:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:04.626 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.626 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.626 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.886 11:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.456 00:18:05.456 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.456 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.456 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.716 { 00:18:05.716 "cntlid": 137, 00:18:05.716 "qid": 0, 00:18:05.716 "state": "enabled", 00:18:05.716 "thread": "nvmf_tgt_poll_group_000", 00:18:05.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.716 "listen_address": { 00:18:05.716 "trtype": "TCP", 00:18:05.716 "adrfam": "IPv4", 00:18:05.716 "traddr": "10.0.0.2", 00:18:05.716 "trsvcid": "4420" 00:18:05.716 }, 00:18:05.716 "peer_address": { 00:18:05.716 "trtype": "TCP", 00:18:05.716 "adrfam": "IPv4", 00:18:05.716 "traddr": "10.0.0.1", 00:18:05.716 "trsvcid": "38972" 00:18:05.716 }, 00:18:05.716 "auth": { 00:18:05.716 "state": "completed", 00:18:05.716 "digest": "sha512", 00:18:05.716 "dhgroup": "ffdhe8192" 00:18:05.716 } 00:18:05.716 } 00:18:05.716 ]' 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.716 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.976 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:18:05.976 11:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.547 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.808 11:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.808 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.808 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.808 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.068 00:18:07.068 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.068 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.068 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.330 { 00:18:07.330 "cntlid": 139, 00:18:07.330 "qid": 0, 00:18:07.330 "state": "enabled", 00:18:07.330 "thread": "nvmf_tgt_poll_group_000", 00:18:07.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.330 "listen_address": { 00:18:07.330 "trtype": "TCP", 00:18:07.330 "adrfam": "IPv4", 00:18:07.330 "traddr": "10.0.0.2", 00:18:07.330 "trsvcid": "4420" 00:18:07.330 }, 00:18:07.330 "peer_address": { 00:18:07.330 "trtype": "TCP", 00:18:07.330 "adrfam": "IPv4", 00:18:07.330 "traddr": "10.0.0.1", 00:18:07.330 "trsvcid": "39654" 00:18:07.330 }, 00:18:07.330 "auth": { 00:18:07.330 "state": "completed", 00:18:07.330 "digest": "sha512", 00:18:07.330 "dhgroup": "ffdhe8192" 00:18:07.330 } 00:18:07.330 } 00:18:07.330 ]' 00:18:07.330 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.331 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.331 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.331 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.331 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.598 11:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.598 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.598 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.599 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:18:07.599 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: --dhchap-ctrl-secret DHHC-1:02:MTUwNTE4NzVhNzIzMThjN2Y4YTBmZDFmOWNhYzNkYjk2NTQzNzIyODY0YjRkODM5o8p/kQ==: 00:18:08.538 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.538 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.538 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.538 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.539 11:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.539 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.109 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.109 { 00:18:09.109 "cntlid": 141, 00:18:09.109 "qid": 0, 00:18:09.109 "state": "enabled", 00:18:09.109 "thread": "nvmf_tgt_poll_group_000", 00:18:09.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.109 "listen_address": { 00:18:09.109 "trtype": "TCP", 00:18:09.109 "adrfam": "IPv4", 00:18:09.109 "traddr": "10.0.0.2", 00:18:09.109 "trsvcid": "4420" 00:18:09.109 }, 00:18:09.109 "peer_address": { 00:18:09.109 "trtype": "TCP", 00:18:09.109 "adrfam": "IPv4", 00:18:09.109 "traddr": "10.0.0.1", 00:18:09.109 "trsvcid": "39688" 00:18:09.109 }, 00:18:09.109 "auth": { 00:18:09.109 "state": "completed", 00:18:09.109 "digest": "sha512", 00:18:09.109 "dhgroup": "ffdhe8192" 00:18:09.109 } 00:18:09.109 } 00:18:09.109 ]' 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.109 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.369 11:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.369 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.369 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.369 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.369 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.629 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:18:09.629 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:01:NzMxOGRiZWVkZDk2ZjJiOWVjYWNiY2ZiODQyYTA3MjW/YjC9: 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.199 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.458 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:10.458 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.459 11:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.459 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.718 00:18:10.718 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.718 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.718 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.978 { 00:18:10.978 "cntlid": 143, 00:18:10.978 "qid": 0, 00:18:10.978 "state": "enabled", 00:18:10.978 "thread": "nvmf_tgt_poll_group_000", 00:18:10.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.978 "listen_address": { 00:18:10.978 "trtype": "TCP", 00:18:10.978 "adrfam": "IPv4", 00:18:10.978 "traddr": "10.0.0.2", 00:18:10.978 "trsvcid": "4420" 00:18:10.978 }, 00:18:10.978 "peer_address": { 00:18:10.978 "trtype": "TCP", 00:18:10.978 "adrfam": "IPv4", 00:18:10.978 "traddr": "10.0.0.1", 00:18:10.978 "trsvcid": "39710" 00:18:10.978 }, 00:18:10.978 "auth": { 00:18:10.978 "state": "completed", 00:18:10.978 "digest": "sha512", 00:18:10.978 "dhgroup": "ffdhe8192" 00:18:10.978 } 00:18:10.978 } 00:18:10.978 ]' 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.978 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.978 
11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.237 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.237 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.238 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.238 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.238 11:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.238 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:11.238 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.179 11:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.179 11:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.755 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.755 { 00:18:12.755 "cntlid": 145, 00:18:12.755 "qid": 0, 00:18:12.755 "state": "enabled", 00:18:12.755 "thread": "nvmf_tgt_poll_group_000", 00:18:12.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.755 "listen_address": { 00:18:12.755 "trtype": "TCP", 00:18:12.755 "adrfam": "IPv4", 00:18:12.755 "traddr": "10.0.0.2", 00:18:12.755 "trsvcid": "4420" 00:18:12.755 }, 00:18:12.755 "peer_address": { 00:18:12.755 
"trtype": "TCP", 00:18:12.755 "adrfam": "IPv4", 00:18:12.755 "traddr": "10.0.0.1", 00:18:12.755 "trsvcid": "39742" 00:18:12.755 }, 00:18:12.755 "auth": { 00:18:12.755 "state": "completed", 00:18:12.755 "digest": "sha512", 00:18:12.755 "dhgroup": "ffdhe8192" 00:18:12.755 } 00:18:12.755 } 00:18:12.755 ]' 00:18:12.755 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.016 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.276 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:18:13.276 11:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2I5YzJmYzQ5OTY4ZDllOWRjYjYxZTMyZWUwZTE4N2Y5Y2ViN2U2NzMxZTRjZjMwk24Xnw==: --dhchap-ctrl-secret DHHC-1:03:ZWY3ZmZkMmNjNjFiNGY4NmViNGFhOGQzMGFlMTc0N2M1MzAwNmIwYzJkNGIyYmVlOTUwNmNiMWNhODI5MTk2Nw4JMOw=: 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:13.846 11:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:14.417 request: 00:18:14.417 { 00:18:14.417 "name": "nvme0", 00:18:14.417 "trtype": "tcp", 00:18:14.417 "traddr": "10.0.0.2", 00:18:14.417 "adrfam": "ipv4", 00:18:14.417 "trsvcid": "4420", 00:18:14.417 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.417 "prchk_reftag": false, 00:18:14.417 "prchk_guard": false, 00:18:14.417 "hdgst": false, 00:18:14.417 "ddgst": false, 00:18:14.417 "dhchap_key": "key2", 00:18:14.417 "allow_unrecognized_csi": false, 00:18:14.417 "method": "bdev_nvme_attach_controller", 00:18:14.417 "req_id": 1 00:18:14.417 } 00:18:14.417 Got JSON-RPC error response 00:18:14.417 response: 00:18:14.417 { 00:18:14.417 "code": -5, 00:18:14.417 "message": "Input/output error" 00:18:14.417 } 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.417 11:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.417 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.418 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:14.679 request: 00:18:14.679 { 00:18:14.679 "name": "nvme0", 00:18:14.679 "trtype": "tcp", 00:18:14.679 "traddr": "10.0.0.2", 00:18:14.679 "adrfam": "ipv4", 00:18:14.679 "trsvcid": "4420", 00:18:14.679 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.679 "prchk_reftag": false, 00:18:14.679 "prchk_guard": false, 00:18:14.679 "hdgst": false, 00:18:14.679 "ddgst": false, 00:18:14.679 "dhchap_key": "key1", 00:18:14.679 "dhchap_ctrlr_key": "ckey2", 00:18:14.679 "allow_unrecognized_csi": false, 00:18:14.679 "method": "bdev_nvme_attach_controller", 00:18:14.679 "req_id": 1 00:18:14.679 } 00:18:14.679 Got JSON-RPC error response 00:18:14.679 response: 00:18:14.679 { 00:18:14.679 "code": -5, 00:18:14.679 "message": "Input/output error" 00:18:14.679 } 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:14.679 11:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.679 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.250 request: 00:18:15.250 { 00:18:15.250 "name": "nvme0", 00:18:15.250 "trtype": "tcp", 00:18:15.250 "traddr": "10.0.0.2", 00:18:15.250 "adrfam": "ipv4", 00:18:15.250 "trsvcid": "4420", 00:18:15.250 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.250 "prchk_reftag": false, 00:18:15.250 "prchk_guard": false, 00:18:15.250 "hdgst": false, 00:18:15.250 "ddgst": false, 00:18:15.250 "dhchap_key": "key1", 00:18:15.250 "dhchap_ctrlr_key": "ckey1", 00:18:15.250 "allow_unrecognized_csi": false, 00:18:15.250 "method": "bdev_nvme_attach_controller", 00:18:15.250 "req_id": 1 00:18:15.250 } 00:18:15.250 Got JSON-RPC error response 00:18:15.250 response: 00:18:15.250 { 00:18:15.250 "code": -5, 00:18:15.250 "message": "Input/output error" 00:18:15.250 } 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 18151 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 18151 ']' 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 18151 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.250 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18151 00:18:15.250 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.250 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.250 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18151' 00:18:15.250 killing process with pid 18151 00:18:15.250 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 18151 00:18:15.250 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 18151 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=43266 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 43266 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 43266 ']' 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.511 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 43266 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 43266 ']' 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.452 null0 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hCc 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.452 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.fcA ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fcA 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BGq 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1SO ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1SO 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.713 11:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lp9 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Q5D ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q5D 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.BUf 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.714 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.714 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:18:16.714 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.284 nvme0n1 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.543 { 00:18:17.543 "cntlid": 1, 00:18:17.543 "qid": 0, 00:18:17.543 "state": "enabled", 00:18:17.543 "thread": "nvmf_tgt_poll_group_000", 00:18:17.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.543 "listen_address": { 00:18:17.543 "trtype": "TCP", 00:18:17.543 "adrfam": "IPv4", 00:18:17.543 "traddr": "10.0.0.2", 00:18:17.543 "trsvcid": "4420" 00:18:17.543 }, 00:18:17.543 "peer_address": { 00:18:17.543 "trtype": "TCP", 00:18:17.543 "adrfam": "IPv4", 00:18:17.543 "traddr": "10.0.0.1", 00:18:17.543 "trsvcid": "52432" 00:18:17.543 }, 00:18:17.543 "auth": { 00:18:17.543 "state": "completed", 00:18:17.543 "digest": "sha512", 00:18:17.543 "dhgroup": "ffdhe8192" 00:18:17.543 } 00:18:17.543 } 00:18:17.543 ]' 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.543 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.803 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.803 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.803 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.803 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.803 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.063 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:18.063 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:18.633 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.894 request: 00:18:18.894 { 00:18:18.894 "name": "nvme0", 00:18:18.894 "trtype": "tcp", 00:18:18.894 "traddr": "10.0.0.2", 00:18:18.894 "adrfam": "ipv4", 00:18:18.894 "trsvcid": "4420", 00:18:18.894 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.894 "prchk_reftag": false, 00:18:18.894 "prchk_guard": false, 00:18:18.894 "hdgst": false, 00:18:18.894 "ddgst": false, 00:18:18.894 "dhchap_key": "key3", 00:18:18.894 "allow_unrecognized_csi": false, 00:18:18.894 "method": "bdev_nvme_attach_controller", 00:18:18.894 "req_id": 1 00:18:18.894 } 00:18:18.894 Got JSON-RPC error response 00:18:18.894 response: 00:18:18.894 { 00:18:18.894 "code": -5, 00:18:18.894 "message": "Input/output error" 00:18:18.894 } 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:18.894 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.154 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.415 request: 00:18:19.415 { 00:18:19.415 "name": "nvme0", 00:18:19.415 "trtype": "tcp", 00:18:19.415 "traddr": "10.0.0.2", 00:18:19.415 "adrfam": "ipv4", 00:18:19.415 "trsvcid": "4420", 00:18:19.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.415 "prchk_reftag": false, 00:18:19.415 "prchk_guard": false, 00:18:19.415 "hdgst": false, 00:18:19.415 "ddgst": false, 00:18:19.415 "dhchap_key": "key3", 00:18:19.415 "allow_unrecognized_csi": false, 00:18:19.415 "method": "bdev_nvme_attach_controller", 00:18:19.415 "req_id": 1 00:18:19.415 } 00:18:19.415 Got JSON-RPC error response 00:18:19.415 response: 00:18:19.415 { 00:18:19.415 "code": -5, 00:18:19.415 "message": "Input/output error" 00:18:19.415 } 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.415 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.675 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.935 request: 00:18:19.935 { 00:18:19.935 "name": "nvme0", 00:18:19.935 "trtype": "tcp", 00:18:19.935 "traddr": "10.0.0.2", 00:18:19.935 "adrfam": "ipv4", 00:18:19.935 "trsvcid": "4420", 00:18:19.935 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.935 "prchk_reftag": false, 00:18:19.935 "prchk_guard": false, 00:18:19.935 "hdgst": false, 00:18:19.935 "ddgst": false, 00:18:19.935 "dhchap_key": "key0", 00:18:19.935 "dhchap_ctrlr_key": "key1", 00:18:19.935 "allow_unrecognized_csi": false, 00:18:19.935 "method": "bdev_nvme_attach_controller", 00:18:19.935 "req_id": 1 00:18:19.935 } 00:18:19.935 Got JSON-RPC error response 00:18:19.935 response: 00:18:19.935 { 00:18:19.935 "code": -5, 00:18:19.935 "message": "Input/output error" 00:18:19.935 } 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.935 11:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:19.935 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:20.196 nvme0n1 00:18:20.196 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:20.196 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:20.196 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.196 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.196 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.196 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:20.457 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:21.397 nvme0n1 00:18:21.397 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:21.397 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:21.397 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:21.397 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.658 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.658 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:21.658 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: --dhchap-ctrl-secret DHHC-1:03:YjY2N2Y2MDM1OTY2M2I0ODI2ZTFmMDIwZjM1N2MzYTBmM2Y5YmQ0ZWQwZWQ3NTNmM2YxNTM5OGI3ZGU5MGM0MAF8hd4=: 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.228 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:22.228 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:22.229 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:22.800 request: 00:18:22.800 { 00:18:22.800 "name": "nvme0", 00:18:22.800 "trtype": "tcp", 00:18:22.800 "traddr": "10.0.0.2", 00:18:22.800 "adrfam": "ipv4", 00:18:22.800 "trsvcid": "4420", 00:18:22.800 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.800 "prchk_reftag": false, 00:18:22.800 "prchk_guard": false, 00:18:22.800 "hdgst": false, 00:18:22.800 "ddgst": false, 00:18:22.800 "dhchap_key": "key1", 00:18:22.800 "allow_unrecognized_csi": false, 00:18:22.800 "method": "bdev_nvme_attach_controller", 00:18:22.800 "req_id": 1 00:18:22.800 } 00:18:22.800 Got JSON-RPC error response 00:18:22.800 response: 00:18:22.800 { 00:18:22.800 "code": -5, 00:18:22.800 "message": "Input/output error" 00:18:22.800 } 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.800 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.372 nvme0n1 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.633 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:23.893 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:24.154 nvme0n1 00:18:24.154 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:24.154 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:24.154 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: '' 2s 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: ]] 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzA4NDlmODUyNWQ2MWYzNDc0MzdhYTc0YWI4YWVlYjEneo3n: 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:24.416 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: 2s 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: ]] 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjkyNjA3YzhmOTc5YmY4MjM4YjQyYjNhOGNmMzdiMjQ5Y2NlZjlhMWYwMTQ2ZTc1QzBfCg==: 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:26.958 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:28.868 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:29.438 nvme0n1 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.438 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.699 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:29.699 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.699 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:29.959 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:30.220 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:30.220 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:30.220 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.220 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.792 request: 00:18:30.792 { 00:18:30.792 "name": "nvme0", 00:18:30.792 "dhchap_key": "key1", 00:18:30.792 "dhchap_ctrlr_key": "key3", 00:18:30.792 "method": "bdev_nvme_set_keys", 00:18:30.792 "req_id": 1 00:18:30.792 } 00:18:30.792 Got JSON-RPC error response 00:18:30.792 response: 00:18:30.792 { 00:18:30.792 "code": -13, 00:18:30.792 "message": "Permission denied" 00:18:30.792 } 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:30.792 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.054 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:31.054 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.997 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.258 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.258 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.258 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.258 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:32.830 nvme0n1 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
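[Note: the surrounding trace exercises SPDK's DH-HMAC-CHAP re-key path. A minimal sketch of the two RPCs involved, using the subsystem and controller names from this run (the host NQN is abbreviated to <hostnqn>):

  # target side: switch the subsystem to a new key pair
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key key3
  # host side: re-authenticate the live controller with the matching keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

When the host offers a controller key the target no longer accepts (key0 here, after the target has moved to key2/key3), bdev_nvme_set_keys is expected to fail with JSON-RPC error -13 (Permission denied), which is what the NOT wrapper asserts in the records that follow.]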
00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:32.830 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:33.403 request: 00:18:33.403 { 00:18:33.403 "name": "nvme0", 00:18:33.403 "dhchap_key": "key2", 00:18:33.403 "dhchap_ctrlr_key": "key0", 00:18:33.403 "method": "bdev_nvme_set_keys", 00:18:33.403 "req_id": 1 00:18:33.403 } 00:18:33.403 Got JSON-RPC error response 00:18:33.403 response: 00:18:33.403 { 00:18:33.403 "code": -13, 00:18:33.403 "message": "Permission denied" 00:18:33.403 } 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:33.403 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 18186 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 18186 ']' 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 18186 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:34.790 11:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 18186 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 18186' 00:18:34.790 killing process with pid 18186 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 18186 00:18:34.790 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 18186 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # sync 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # set +e 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # for i in {1..20} 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:18:35.052 rmmod nvme_tcp 00:18:35.052 rmmod nvme_fabrics 00:18:35.052 rmmod nvme_keyring 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # set -e 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@130 -- # return 0 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 43266 ']' 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 43266 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 43266 ']' 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 43266 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43266 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43266' 00:18:35.052 killing process with pid 43266 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 43266 00:18:35.052 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 43266 00:18:35.313 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:35.313 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:35.313 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # iptr 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # remove_spdk_ns 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.314 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hCc /tmp/spdk.key-sha256.BGq /tmp/spdk.key-sha384.lp9 /tmp/spdk.key-sha512.BUf /tmp/spdk.key-sha512.fcA /tmp/spdk.key-sha384.1SO /tmp/spdk.key-sha256.Q5D '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:37.230 00:18:37.230 real 2m33.795s 00:18:37.230 user 5m45.872s 00:18:37.230 sys 0m24.160s 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.230 ************************************ 00:18:37.230 END TEST nvmf_auth_target 00:18:37.230 ************************************ 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.230 ************************************ 00:18:37.230 START TEST nvmf_bdevio_no_huge 00:18:37.230 ************************************ 00:18:37.230 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.492 * Looking for test storage... 
00:18:37.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:37.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.492 --rc genhtml_branch_coverage=1 00:18:37.492 --rc genhtml_function_coverage=1 00:18:37.492 --rc genhtml_legend=1 00:18:37.492 --rc geninfo_all_blocks=1 00:18:37.492 --rc geninfo_unexecuted_blocks=1 00:18:37.492 00:18:37.492 ' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:37.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.492 --rc genhtml_branch_coverage=1 00:18:37.492 --rc genhtml_function_coverage=1 00:18:37.492 --rc genhtml_legend=1 00:18:37.492 --rc geninfo_all_blocks=1 00:18:37.492 --rc geninfo_unexecuted_blocks=1 00:18:37.492 00:18:37.492 ' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:37.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.492 --rc genhtml_branch_coverage=1 00:18:37.492 --rc genhtml_function_coverage=1 00:18:37.492 --rc genhtml_legend=1 00:18:37.492 --rc geninfo_all_blocks=1 00:18:37.492 --rc geninfo_unexecuted_blocks=1 00:18:37.492 00:18:37.492 ' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:37.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.492 --rc genhtml_branch_coverage=1 00:18:37.492 --rc genhtml_function_coverage=1 00:18:37.492 --rc genhtml_legend=1 00:18:37.492 --rc geninfo_all_blocks=1 00:18:37.492 --rc geninfo_unexecuted_blocks=1 00:18:37.492 00:18:37.492 ' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.492 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # : 0 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@34 
-- # '[' '' -eq 1 ']' 00:18:37.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@56 -- # have_pci_nics=0 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # xtrace_disable 00:18:37.493 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_devs=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_devs 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_net_devs=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # pci_drivers=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # local -A pci_drivers 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # net_devs=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga net_devs 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # e810=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga e810 00:18:45.643 
11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # x722=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga x722 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@323 -- # mlx=() 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@323 -- # local -ga mlx 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:45.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:45.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:45.643 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:45.643 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:45.643 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:18:45.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:18:45.644 00:18:45.644 --- 10.0.0.2 ping statistics --- 00:18:45.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.644 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:18:45.644 00:18:45.644 --- 10.0.0.1 ping statistics --- 00:18:45.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.644 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=51393 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 51393 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 51393 ']' 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.644 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.644 [2024-12-09 11:53:52.852588] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:18:45.644 [2024-12-09 11:53:52.852675] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:45.644 [2024-12-09 11:53:52.958550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.644 [2024-12-09 11:53:53.019882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.644 [2024-12-09 11:53:53.019933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.644 [2024-12-09 11:53:53.019942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.644 [2024-12-09 11:53:53.019949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.644 [2024-12-09 11:53:53.019956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
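Aside: the nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables commands. It moves the first E810 port (cvl_0_0) into a private network namespace to play the target, leaves the second port (cvl_0_1) in the root namespace as the initiator, opens TCP port 4420 across the link, and sanity-checks both directions with ping. Every command below is lifted from the trace; only the interface names are specific to this machine:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

The target is then started inside that namespace, which is why the launch line reads "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78": no hugepages, 1024 MB of ordinary memory, and reactors pinned by core mask 0x78 (cores 3-6, matching the four reactor notices that follow).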
00:18:45.644 [2024-12-09 11:53:53.021469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.644 [2024-12-09 11:53:53.021628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:45.644 [2024-12-09 11:53:53.021789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.644 [2024-12-09 11:53:53.021897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.905 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 [2024-12-09 11:53:53.728390] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 Malloc0 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.906 [2024-12-09 11:53:53.782409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:45.906 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:18:46.167 { 00:18:46.167 "params": { 00:18:46.167 "name": "Nvme$subsystem", 00:18:46.167 "trtype": "$TEST_TRANSPORT", 00:18:46.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.167 "adrfam": "ipv4", 00:18:46.167 "trsvcid": "$NVMF_PORT", 00:18:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.167 "hdgst": ${hdgst:-false}, 00:18:46.167 "ddgst": ${ddgst:-false} 00:18:46.167 }, 00:18:46.167 "method": "bdev_nvme_attach_controller" 00:18:46.167 } 00:18:46.167 EOF 00:18:46.167 )") 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:18:46.167 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:18:46.167 "params": { 00:18:46.167 "name": "Nvme1", 00:18:46.167 "trtype": "tcp", 00:18:46.167 "traddr": "10.0.0.2", 00:18:46.167 "adrfam": "ipv4", 00:18:46.167 "trsvcid": "4420", 00:18:46.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.167 "hdgst": false, 00:18:46.167 "ddgst": false 00:18:46.167 }, 00:18:46.167 "method": "bdev_nvme_attach_controller" 00:18:46.167 }' 00:18:46.167 [2024-12-09 11:53:53.842125] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
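For reference, the rpc_cmd plumbing above is SPDK's scripts/rpc.py talking to the target over its default /var/tmp/spdk.sock. Spelled out by hand, the bdevio target setup is roughly the following (a sketch against an already running nvmf_tgt, with the exact arguments taken from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

bdevio then runs as a second, initiator-side SPDK app: gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed above and feeds it in via --json /dev/fd/62, so the only bdev it sees is Nvme1n1 reached over NVMe/TCP rather than any local PCIe device.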
00:18:46.167 [2024-12-09 11:53:53.842195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid51747 ] 00:18:46.167 [2024-12-09 11:53:53.936213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:46.167 [2024-12-09 11:53:53.996377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.167 [2024-12-09 11:53:53.996502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.167 [2024-12-09 11:53:53.996506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.427 I/O targets: 00:18:46.427 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:46.427 00:18:46.427 00:18:46.427 CUnit - A unit testing framework for C - Version 2.1-3 00:18:46.427 http://cunit.sourceforge.net/ 00:18:46.427 00:18:46.427 00:18:46.427 Suite: bdevio tests on: Nvme1n1 00:18:46.427 Test: blockdev write read block ...passed 00:18:46.427 Test: blockdev write zeroes read block ...passed 00:18:46.427 Test: blockdev write zeroes read no split ...passed 00:18:46.686 Test: blockdev write zeroes read split ...passed 00:18:46.686 Test: blockdev write zeroes read split partial ...passed 00:18:46.686 Test: blockdev reset ...[2024-12-09 11:53:54.425276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:46.686 [2024-12-09 11:53:54.425346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1895430 (9): Bad file descriptor 00:18:46.686 [2024-12-09 11:53:54.441864] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
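Two notes on the reset output above and the compare output below. The "Failed to flush tqpair ... (9): Bad file descriptor" line is expected here: the reset test disconnects the controller on purpose, so in-flight completions on the torn-down socket fail with EBADF before bdev_nvme reconnects and reports the reset successful. Likewise, the COMPARE FAILURE (02/85) / ABORTED - FAILED FUSED (00/09) pairs printed by the comparev-and-writev test are the intended outcome, not a fault: when the COMPARE half of a fused pair miscompares, the controller must abort the paired WRITE with a failed-fused status. Outside bdevio, the same reset path can be exercised by hand via RPC (controller name illustrative, not from this run):

    rpc.py bdev_nvme_reset_controller Nvme1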
00:18:46.686 passed 00:18:46.686 Test: blockdev write read 8 blocks ...passed 00:18:46.686 Test: blockdev write read size > 128k ...passed 00:18:46.686 Test: blockdev write read invalid size ...passed 00:18:46.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:46.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:46.686 Test: blockdev write read max offset ...passed 00:18:46.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:46.946 Test: blockdev writev readv 8 blocks ...passed 00:18:46.946 Test: blockdev writev readv 30 x 1block ...passed 00:18:46.946 Test: blockdev writev readv block ...passed 00:18:46.946 Test: blockdev writev readv size > 128k ...passed 00:18:46.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:46.946 Test: blockdev comparev and writev ...[2024-12-09 11:53:54.747290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.946 [2024-12-09 11:53:54.747317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.946 [2024-12-09 11:53:54.747328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.946 [2024-12-09 11:53:54.747334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.946 [2024-12-09 11:53:54.747785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.946 [2024-12-09 11:53:54.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.946 [2024-12-09 11:53:54.747803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.946 [2024-12-09 11:53:54.747809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.947 [2024-12-09 11:53:54.748280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.947 [2024-12-09 11:53:54.748289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.947 [2024-12-09 11:53:54.748299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.947 [2024-12-09 11:53:54.748305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.947 [2024-12-09 11:53:54.748804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.947 [2024-12-09 11:53:54.748812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.947 [2024-12-09 11:53:54.748821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:46.947 [2024-12-09 11:53:54.748826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.947 passed 00:18:47.207 Test: blockdev nvme passthru rw ...passed 00:18:47.207 Test: blockdev nvme passthru vendor specific ...[2024-12-09 11:53:54.833478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.207 [2024-12-09 11:53:54.833489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:47.207 [2024-12-09 11:53:54.833809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.207 [2024-12-09 11:53:54.833818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:47.207 [2024-12-09 11:53:54.834161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.207 [2024-12-09 11:53:54.834168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:47.207 [2024-12-09 11:53:54.834502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.207 [2024-12-09 11:53:54.834510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:47.207 passed 00:18:47.207 Test: blockdev nvme admin passthru ...passed 00:18:47.207 Test: blockdev copy ...passed 00:18:47.207 00:18:47.207 Run Summary: Type Total Ran Passed Failed Inactive 00:18:47.207 suites 1 1 n/a 0 0 00:18:47.207 tests 23 23 23 0 0 00:18:47.207 asserts 152 152 152 0 n/a 00:18:47.207 00:18:47.207 Elapsed time = 1.353 seconds 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # sync 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # set +e 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # for i in {1..20} 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:18:47.474 rmmod nvme_tcp 00:18:47.474 rmmod nvme_fabrics 00:18:47.474 rmmod nvme_keyring 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@129 -- # set -e 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@130 -- # return 0 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 51393 ']' 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 51393 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 51393 ']' 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 51393 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 51393 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51393' 00:18:47.474 killing process with pid 51393 00:18:47.474 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 51393 00:18:47.475 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 51393 00:18:47.735 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:47.735 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:47.735 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:47.735 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # iptr 00:18:47.735 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # remove_spdk_ns 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.736 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:18:50.282 00:18:50.282 real 0m12.543s 00:18:50.282 user 0m14.386s 00:18:50.282 sys 0m6.726s 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.282 ************************************ 00:18:50.282 END TEST nvmf_bdevio_no_huge 00:18:50.282 ************************************ 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.282 ************************************ 00:18:50.282 START TEST nvmf_tls 00:18:50.282 ************************************ 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:50.282 * Looking for test storage... 00:18:50.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.282 --rc genhtml_branch_coverage=1 00:18:50.282 --rc genhtml_function_coverage=1 00:18:50.282 --rc genhtml_legend=1 00:18:50.282 --rc geninfo_all_blocks=1 00:18:50.282 --rc geninfo_unexecuted_blocks=1 00:18:50.282 00:18:50.282 ' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.282 --rc genhtml_branch_coverage=1 00:18:50.282 --rc genhtml_function_coverage=1 00:18:50.282 --rc genhtml_legend=1 00:18:50.282 --rc geninfo_all_blocks=1 00:18:50.282 --rc geninfo_unexecuted_blocks=1 00:18:50.282 00:18:50.282 ' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.282 --rc genhtml_branch_coverage=1 00:18:50.282 --rc genhtml_function_coverage=1 00:18:50.282 --rc genhtml_legend=1 00:18:50.282 --rc geninfo_all_blocks=1 00:18:50.282 --rc geninfo_unexecuted_blocks=1 00:18:50.282 00:18:50.282 ' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.282 --rc genhtml_branch_coverage=1 00:18:50.282 --rc genhtml_function_coverage=1 00:18:50.282 --rc genhtml_legend=1 00:18:50.282 --rc geninfo_all_blocks=1 00:18:50.282 --rc geninfo_unexecuted_blocks=1 00:18:50.282 00:18:50.282 ' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
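The "lt 1.15 2" probe above (the same dance the bdevio run went through) is how the harness decides that the installed lcov (1.15 here) predates v2 and therefore still needs the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings. Element-wise it is an ordinary dotted-version comparison; a standalone sketch of the logic, with a function name of my choosing rather than the script's:

    version_lt() {                         # true if $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo "legacy lcov options needed"

With ver1=(1 15) and ver2=(2), the very first element settles it (1 < 2), which is the "return 0" visible in the trace.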
00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:18:50.282 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # : 0 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:18:50.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@56 -- # have_pci_nics=0 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # xtrace_disable 00:18:50.283 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_devs=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_devs 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_net_devs=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # pci_drivers=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # local -A pci_drivers 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # net_devs=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga net_devs 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # e810=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga e810 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # x722=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga x722 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # mlx=() 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@323 -- # local -ga mlx 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:58.470 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:58.470 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.470 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:58.470 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:58.470 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.470 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:18:58.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:18:58.471 00:18:58.471 --- 10.0.0.2 ping statistics --- 00:18:58.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.471 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:18:58.471 00:18:58.471 --- 10.0.0.1 ping statistics --- 00:18:58.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.471 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=56204 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 56204 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 56204 ']' 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.471 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.471 [2024-12-09 11:54:05.420280] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:18:58.471 [2024-12-09 11:54:05.420354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.471 [2024-12-09 11:54:05.523239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.471 [2024-12-09 11:54:05.572884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.471 [2024-12-09 11:54:05.572937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.471 [2024-12-09 11:54:05.572945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.471 [2024-12-09 11:54:05.572953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.471 [2024-12-09 11:54:05.572959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.471 [2024-12-09 11:54:05.573727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:58.471 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:58.761 true 00:18:58.761 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.761 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:59.052 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:59.052 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:59.052 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.053 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.053 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:59.321 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:59.321 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:59.321 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.583 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:59.844 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:59.844 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:59.844 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:00.107 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.107 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:00.107 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:00.107 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:00.107 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:00.369 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.369 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yjgAodulAk 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.xtbNTKtn7s 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yjgAodulAk 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.xtbNTKtn7s 00:19:00.631 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:00.892 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:01.152 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yjgAodulAk 00:19:01.152 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yjgAodulAk 00:19:01.152 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.152 [2024-12-09 11:54:08.988891] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.152 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.412 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.673 [2024-12-09 11:54:09.325707] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.673 [2024-12-09 11:54:09.325892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.673 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:01.673 malloc0 00:19:01.673 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:01.934 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yjgAodulAk 00:19:02.195 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.195 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yjgAodulAk 00:19:14.426 Initializing NVMe Controllers 00:19:14.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:14.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:14.426 Initialization complete. Launching workers. 00:19:14.426 ======================================================== 00:19:14.426 Latency(us) 00:19:14.426 Device Information : IOPS MiB/s Average min max 00:19:14.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18690.17 73.01 3424.48 1101.42 4068.49 00:19:14.426 ======================================================== 00:19:14.426 Total : 18690.17 73.01 3424.48 1101.42 4068.49 00:19:14.426 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yjgAodulAk 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yjgAodulAk 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=59665 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 59665 /var/tmp/bdevperf.sock 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 59665 ']' 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:14.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.426 [2024-12-09 11:54:20.185184] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:14.426 [2024-12-09 11:54:20.185241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:19:14.426 [2024-12-09 11:54:20.241663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.426 [2024-12-09 11:54:20.270778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yjgAodulAk 00:19:14.426 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.427 [2024-12-09 11:54:20.689319] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.427 TLSTESTn1 00:19:14.427 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:14.427 Running I/O for 10 seconds... 
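The PSK files handed to bdevperf in this run (/tmp/tmp.yjgAodulAk holding the valid key, /tmp/tmp.xtbNTKtn7s the mismatched one) were produced by format_interchange_psk at target/tls.sh@119-120, which wraps a configured PSK in the NVMe TLS interchange format. A minimal sketch of that transform, mirroring the python step traced at nvmf/common.sh@729 (the little-endian CRC-32 trailer and the "01" hash identifier are read off the traced output, not the script source, so treat both as assumptions):

# Sketch: reproduce the NVMeTLSkey-1 interchange format seen in the trace.
# Assumes python3 on PATH; the key value is copied from target/tls.sh@119.
key="00112233445566778899aabbccddeeff"
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()
# 4-byte little-endian CRC-32 trailer appended to the configured PSK
crc = struct.pack("<I", zlib.crc32(key))
# prefix "NVMeTLSkey-1", hash id "01", base64(key + crc), colon-terminated
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF

If the CRC convention holds, this prints the exact key captured above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: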
00:19:14.997 5298.00 IOPS, 20.70 MiB/s [2024-12-09T10:54:24.269Z] 5830.00 IOPS, 22.77 MiB/s [2024-12-09T10:54:25.211Z] 5968.33 IOPS, 23.31 MiB/s [2024-12-09T10:54:26.151Z] 5789.50 IOPS, 22.62 MiB/s [2024-12-09T10:54:27.093Z] 5850.60 IOPS, 22.85 MiB/s [2024-12-09T10:54:28.037Z] 5952.50 IOPS, 23.25 MiB/s [2024-12-09T10:54:28.979Z] 5857.86 IOPS, 22.88 MiB/s [2024-12-09T10:54:29.922Z] 5762.12 IOPS, 22.51 MiB/s [2024-12-09T10:54:31.308Z] 5788.00 IOPS, 22.61 MiB/s [2024-12-09T10:54:31.308Z] 5827.90 IOPS, 22.77 MiB/s 00:19:23.422 Latency(us) 00:19:23.422 [2024-12-09T10:54:31.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.422 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:23.422 Verification LBA range: start 0x0 length 0x2000 00:19:23.422 TLSTESTn1 : 10.01 5832.45 22.78 0.00 0.00 21914.46 5543.25 77332.48 00:19:23.422 [2024-12-09T10:54:31.308Z] =================================================================================================================== 00:19:23.422 [2024-12-09T10:54:31.308Z] Total : 5832.45 22.78 0.00 0.00 21914.46 5543.25 77332.48 00:19:23.422 { 00:19:23.422 "results": [ 00:19:23.422 { 00:19:23.422 "job": "TLSTESTn1", 00:19:23.422 "core_mask": "0x4", 00:19:23.422 "workload": "verify", 00:19:23.422 "status": "finished", 00:19:23.422 "verify_range": { 00:19:23.422 "start": 0, 00:19:23.422 "length": 8192 00:19:23.422 }, 00:19:23.422 "queue_depth": 128, 00:19:23.422 "io_size": 4096, 00:19:23.422 "runtime": 10.013799, 00:19:23.422 "iops": 5832.451799761509, 00:19:23.422 "mibps": 22.783014842818396, 00:19:23.422 "io_failed": 0, 00:19:23.422 "io_timeout": 0, 00:19:23.422 "avg_latency_us": 21914.455364209687, 00:19:23.422 "min_latency_us": 5543.253333333333, 00:19:23.422 "max_latency_us": 77332.48 00:19:23.422 } 00:19:23.422 ], 00:19:23.422 "core_count": 1 00:19:23.422 } 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 59665 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 59665 ']' 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 59665 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59665 00:19:23.422 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59665' 00:19:23.422 killing process with pid 59665 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 59665 00:19:23.422 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.422 00:19:23.422 Latency(us) 00:19:23.422 [2024-12-09T10:54:31.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.422 [2024-12-09T10:54:31.308Z] 
=================================================================================================================== 00:19:23.422 [2024-12-09T10:54:31.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 59665 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xtbNTKtn7s 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xtbNTKtn7s 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xtbNTKtn7s 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xtbNTKtn7s 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=61746 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 61746 /var/tmp/bdevperf.sock 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 61746 ']' 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.422 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.422 [2024-12-09 11:54:31.156427] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:23.423 [2024-12-09 11:54:31.156483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61746 ] 00:19:23.423 [2024-12-09 11:54:31.214961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.423 [2024-12-09 11:54:31.242630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.683 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.683 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.684 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xtbNTKtn7s 00:19:23.684 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.944 [2024-12-09 11:54:31.657086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.944 [2024-12-09 11:54:31.661777] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.944 [2024-12-09 11:54:31.662394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819800 (107): Transport endpoint is not connected 00:19:23.944 [2024-12-09 11:54:31.663389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819800 (9): Bad file descriptor 00:19:23.944 [2024-12-09 11:54:31.664392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:23.944 [2024-12-09 11:54:31.664398] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.944 [2024-12-09 11:54:31.664404] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:23.944 [2024-12-09 11:54:31.664412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:23.944 request: 00:19:23.944 { 00:19:23.944 "name": "TLSTEST", 00:19:23.944 "trtype": "tcp", 00:19:23.944 "traddr": "10.0.0.2", 00:19:23.944 "adrfam": "ipv4", 00:19:23.944 "trsvcid": "4420", 00:19:23.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.944 "prchk_reftag": false, 00:19:23.944 "prchk_guard": false, 00:19:23.944 "hdgst": false, 00:19:23.944 "ddgst": false, 00:19:23.944 "psk": "key0", 00:19:23.944 "allow_unrecognized_csi": false, 00:19:23.944 "method": "bdev_nvme_attach_controller", 00:19:23.944 "req_id": 1 00:19:23.944 } 00:19:23.944 Got JSON-RPC error response 00:19:23.944 response: 00:19:23.944 { 00:19:23.944 "code": -5, 00:19:23.944 "message": "Input/output error" 00:19:23.944 } 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 61746 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 61746 ']' 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 61746 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61746 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61746' 00:19:23.944 killing process with pid 61746 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 61746 00:19:23.944 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.944 00:19:23.944 Latency(us) 00:19:23.944 [2024-12-09T10:54:31.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.944 [2024-12-09T10:54:31.830Z] =================================================================================================================== 00:19:23.944 [2024-12-09T10:54:31.830Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.944 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 61746 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yjgAodulAk 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yjgAodulAk 
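The request/response dump above closes the first of a series of deliberate failure cases in tls.sh: a mismatched PSK (target/tls.sh@147, just completed), followed below by an unregistered hostnqn (target/tls.sh@150, nqn.2016-06.io.spdk:host2), a nonexistent subsystem (target/tls.sh@153, nqn.2016-06.io.spdk:cnode2), and a missing key (target/tls.sh@156). Each case runs under the NOT helper, which inverts the exit status so that an expected failure counts as a pass. A simplified sketch of that pattern (the real helper lives in autotest_common.sh; this is an illustrative reduction, not the verbatim source):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed, which is what the test wanted
}

# Usage: an attach with a bad key must fail, so NOT <attach> must return 0.
NOT false && echo "ok: failure was expected"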
00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yjgAodulAk 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yjgAodulAk 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=61785 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 61785 /var/tmp/bdevperf.sock 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 61785 ']' 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.205 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.205 [2024-12-09 11:54:31.905865] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:19:24.205 [2024-12-09 11:54:31.905920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:19:24.205 [2024-12-09 11:54:31.964232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.205 [2024-12-09 11:54:31.992817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.205 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.205 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.205 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yjgAodulAk 00:19:24.465 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:24.726 [2024-12-09 11:54:32.403295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.726 [2024-12-09 11:54:32.411613] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.726 [2024-12-09 11:54:32.411631] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:24.726 [2024-12-09 11:54:32.411658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:24.726 [2024-12-09 11:54:32.412466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25800 (107): Transport endpoint is not connected 00:19:24.726 [2024-12-09 11:54:32.413462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe25800 (9): Bad file descriptor 00:19:24.726 [2024-12-09 11:54:32.414464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:24.726 [2024-12-09 11:54:32.414472] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:24.726 [2024-12-09 11:54:32.414478] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:24.726 [2024-12-09 11:54:32.414486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:24.726 request: 00:19:24.726 { 00:19:24.726 "name": "TLSTEST", 00:19:24.726 "trtype": "tcp", 00:19:24.726 "traddr": "10.0.0.2", 00:19:24.726 "adrfam": "ipv4", 00:19:24.726 "trsvcid": "4420", 00:19:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.726 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:24.726 "prchk_reftag": false, 00:19:24.726 "prchk_guard": false, 00:19:24.726 "hdgst": false, 00:19:24.726 "ddgst": false, 00:19:24.726 "psk": "key0", 00:19:24.726 "allow_unrecognized_csi": false, 00:19:24.726 "method": "bdev_nvme_attach_controller", 00:19:24.726 "req_id": 1 00:19:24.726 } 00:19:24.726 Got JSON-RPC error response 00:19:24.726 response: 00:19:24.726 { 00:19:24.726 "code": -5, 00:19:24.726 "message": "Input/output error" 00:19:24.726 } 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 61785 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 61785 ']' 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 61785 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61785 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61785' 00:19:24.726 killing process with pid 61785 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 61785 00:19:24.726 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.726 00:19:24.726 Latency(us) 00:19:24.726 [2024-12-09T10:54:32.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.726 [2024-12-09T10:54:32.612Z] =================================================================================================================== 00:19:24.726 [2024-12-09T10:54:32.612Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 61785 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yjgAodulAk 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yjgAodulAk 
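Both negative cases so far surface identically, with bdev_nvme_attach_controller returning JSON-RPC code -5 (Input/output error) once the TLS handshake is refused, and the cnode2 case starting here follows the same path below. The same call can be issued by hand against a running bdevperf instance; a sketch using the RPC method and flags exactly as traced, with an illustrative placeholder for the key file (the test itself used mktemp names such as /tmp/tmp.xtbNTKtn7s):

# Sketch: drive the traced attach by hand and expect the handshake to fail.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Register a PSK file that does NOT match the target's configured key.
$rpc -s "$sock" keyring_file_add_key key0 /tmp/mismatched_psk.txt

# The attach should fail with "Input/output error" (-5), as in the dumps above.
if ! $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "attach failed as expected"
fi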
00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yjgAodulAk 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yjgAodulAk 00:19:24.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=62102 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 62102 /var/tmp/bdevperf.sock 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 62102 ']' 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.985 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.985 [2024-12-09 11:54:32.669560] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:19:24.986 [2024-12-09 11:54:32.669615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62102 ] 00:19:24.986 [2024-12-09 11:54:32.728399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.986 [2024-12-09 11:54:32.756036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.986 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.986 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.986 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yjgAodulAk 00:19:25.245 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.506 [2024-12-09 11:54:33.174519] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.506 [2024-12-09 11:54:33.183501] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.506 [2024-12-09 11:54:33.183519] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.506 [2024-12-09 11:54:33.183538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.506 [2024-12-09 11:54:33.183712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d0800 (107): Transport endpoint is not connected 00:19:25.506 [2024-12-09 11:54:33.184707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d0800 (9): Bad file descriptor 00:19:25.506 [2024-12-09 11:54:33.185710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:25.506 [2024-12-09 11:54:33.185717] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.506 [2024-12-09 11:54:33.185723] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:25.506 [2024-12-09 11:54:33.185731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
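The request/response pair dumped above is SPDK's JSON-RPC 2.0 exchange over the bdevperf Unix socket; in the test it is driven through scripts/rpc.py. A hedged, hand-rolled client sketch (not SPDK code; the one-JSON-object-per-message framing and the trimmed parameter set are assumptions, with values taken from the logged request):

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    request = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        # Read until the reply parses as one complete JSON object.
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue

# The attach dumped above (mismatched key: host2 against cnode1) came back
# with {"code": -5, "message": "Input/output error"}; remaining logged
# boolean flags (prchk_*, hdgst, ddgst, ...) omitted here for brevity.
reply = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host2", "psk": "key0",
})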
00:19:25.506 request: 00:19:25.506 { 00:19:25.506 "name": "TLSTEST", 00:19:25.506 "trtype": "tcp", 00:19:25.506 "traddr": "10.0.0.2", 00:19:25.506 "adrfam": "ipv4", 00:19:25.506 "trsvcid": "4420", 00:19:25.506 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.506 "prchk_reftag": false, 00:19:25.506 "prchk_guard": false, 00:19:25.506 "hdgst": false, 00:19:25.506 "ddgst": false, 00:19:25.506 "psk": "key0", 00:19:25.506 "allow_unrecognized_csi": false, 00:19:25.506 "method": "bdev_nvme_attach_controller", 00:19:25.506 "req_id": 1 00:19:25.506 } 00:19:25.506 Got JSON-RPC error response 00:19:25.506 response: 00:19:25.506 { 00:19:25.506 "code": -5, 00:19:25.506 "message": "Input/output error" 00:19:25.506 } 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 62102 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 62102 ']' 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 62102 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62102 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62102' 00:19:25.506 killing process with pid 62102 00:19:25.506 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 62102 00:19:25.507 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.507 00:19:25.507 Latency(us) 00:19:25.507 [2024-12-09T10:54:33.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.507 [2024-12-09T10:54:33.393Z] =================================================================================================================== 00:19:25.507 [2024-12-09T10:54:33.393Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 62102 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.507 11:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=62121 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 62121 /var/tmp/bdevperf.sock 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 62121 ']' 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.507 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.767 [2024-12-09 11:54:33.440796] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:19:25.767 [2024-12-09 11:54:33.440853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62121 ] 00:19:25.767 [2024-12-09 11:54:33.500451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.767 [2024-12-09 11:54:33.528546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.767 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.767 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.767 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:26.026 [2024-12-09 11:54:33.750668] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:26.026 [2024-12-09 11:54:33.750696] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:26.026 request: 00:19:26.026 { 00:19:26.026 "name": "key0", 00:19:26.026 "path": "", 00:19:26.026 "method": "keyring_file_add_key", 00:19:26.026 "req_id": 1 00:19:26.026 } 00:19:26.026 Got JSON-RPC error response 00:19:26.026 response: 00:19:26.026 { 00:19:26.026 "code": -1, 00:19:26.026 "message": "Operation not permitted" 00:19:26.026 } 00:19:26.026 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.287 [2024-12-09 11:54:33.935211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.287 [2024-12-09 11:54:33.935238] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:26.287 request: 00:19:26.287 { 00:19:26.287 "name": "TLSTEST", 00:19:26.287 "trtype": "tcp", 00:19:26.287 "traddr": "10.0.0.2", 00:19:26.287 "adrfam": "ipv4", 00:19:26.287 "trsvcid": "4420", 00:19:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.287 "prchk_reftag": false, 00:19:26.287 "prchk_guard": false, 00:19:26.287 "hdgst": false, 00:19:26.287 "ddgst": false, 00:19:26.287 "psk": "key0", 00:19:26.287 "allow_unrecognized_csi": false, 00:19:26.287 "method": "bdev_nvme_attach_controller", 00:19:26.287 "req_id": 1 00:19:26.287 } 00:19:26.287 Got JSON-RPC error response 00:19:26.287 response: 00:19:26.287 { 00:19:26.287 "code": -126, 00:19:26.287 "message": "Required key not available" 00:19:26.287 } 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 62121 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 62121 ']' 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 62121 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.287 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62121 
00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62121' 00:19:26.287 killing process with pid 62121 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 62121 00:19:26.287 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.287 00:19:26.287 Latency(us) 00:19:26.287 [2024-12-09T10:54:34.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.287 [2024-12-09T10:54:34.173Z] =================================================================================================================== 00:19:26.287 [2024-12-09T10:54:34.173Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 62121 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 56204 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 56204 ']' 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 56204 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.287 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56204 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56204' 00:19:26.548 killing process with pid 56204 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 56204 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 56204 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.M7Tzoie8F3 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.M7Tzoie8F3 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=62465 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 62465 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 62465 ']' 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.548 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.548 [2024-12-09 11:54:34.414980] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:26.548 [2024-12-09 11:54:34.415031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.810 [2024-12-09 11:54:34.468284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.810 [2024-12-09 11:54:34.496192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.810 [2024-12-09 11:54:34.496219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
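format_interchange_psk above feeds the raw key and digest id through an inline `python -` heredoc whose body is not captured in the trace. A standalone sketch of the transformation, assuming the NVMe/TCP TLS PSK interchange layout of base64(configured key bytes followed by a little-endian CRC32) with the digest id (here 2, i.e. SHA-384) in the middle field:

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Assumed layout: "NVMeTLSkey-1:<hash>:<base64(key bytes + CRC32 LE)>:"
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        hash_id, base64.b64encode(raw + crc).decode())

# Expected to reproduce the key_long value captured above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))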
00:19:26.810 [2024-12-09 11:54:34.496225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.810 [2024-12-09 11:54:34.496229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.810 [2024-12-09 11:54:34.496233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.810 [2024-12-09 11:54:34.496679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.M7Tzoie8F3 00:19:26.810 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.071 [2024-12-09 11:54:34.763338] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.071 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.071 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.333 [2024-12-09 11:54:35.076095] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.333 [2024-12-09 11:54:35.076290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.333 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.594 malloc0 00:19:27.594 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.594 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M7Tzoie8F3 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.M7Tzoie8F3 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=62720 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 62720 /var/tmp/bdevperf.sock 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 62720 ']' 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.855 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.115 [2024-12-09 11:54:35.752618] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:19:28.116 [2024-12-09 11:54:35.752677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62720 ] 00:19:28.116 [2024-12-09 11:54:35.810950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.116 [2024-12-09 11:54:35.840561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.116 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.116 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.116 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:28.376 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.636 [2024-12-09 11:54:36.263076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.636 TLSTESTn1 00:19:28.636 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:28.636 Running I/O for 10 seconds... 00:19:30.965 4326.00 IOPS, 16.90 MiB/s [2024-12-09T10:54:39.793Z] 5346.50 IOPS, 20.88 MiB/s [2024-12-09T10:54:40.736Z] 5618.33 IOPS, 21.95 MiB/s [2024-12-09T10:54:41.675Z] 5669.25 IOPS, 22.15 MiB/s [2024-12-09T10:54:42.616Z] 5556.40 IOPS, 21.70 MiB/s [2024-12-09T10:54:43.557Z] 5597.50 IOPS, 21.87 MiB/s [2024-12-09T10:54:44.498Z] 5651.29 IOPS, 22.08 MiB/s [2024-12-09T10:54:45.881Z] 5692.62 IOPS, 22.24 MiB/s [2024-12-09T10:54:46.823Z] 5666.22 IOPS, 22.13 MiB/s [2024-12-09T10:54:46.823Z] 5630.90 IOPS, 22.00 MiB/s 00:19:38.937 Latency(us) 00:19:38.937 [2024-12-09T10:54:46.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.937 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.937 Verification LBA range: start 0x0 length 0x2000 00:19:38.937 TLSTESTn1 : 10.02 5633.63 22.01 0.00 0.00 22685.63 4614.83 43253.76 00:19:38.937 [2024-12-09T10:54:46.823Z] =================================================================================================================== 00:19:38.937 [2024-12-09T10:54:46.823Z] Total : 5633.63 22.01 0.00 0.00 22685.63 4614.83 43253.76 00:19:38.937 { 00:19:38.937 "results": [ 00:19:38.937 { 00:19:38.937 "job": "TLSTESTn1", 00:19:38.937 "core_mask": "0x4", 00:19:38.937 "workload": "verify", 00:19:38.937 "status": "finished", 00:19:38.937 "verify_range": { 00:19:38.937 "start": 0, 00:19:38.937 "length": 8192 00:19:38.937 }, 00:19:38.937 "queue_depth": 128, 00:19:38.937 "io_size": 4096, 00:19:38.937 "runtime": 10.017527, 00:19:38.937 "iops": 5633.625943808287, 00:19:38.937 "mibps": 22.00635134300112, 00:19:38.937 "io_failed": 0, 00:19:38.937 "io_timeout": 0, 00:19:38.937 "avg_latency_us": 22685.626021676857, 00:19:38.937 "min_latency_us": 4614.826666666667, 00:19:38.937 "max_latency_us": 43253.76 00:19:38.937 } 00:19:38.937 ], 00:19:38.937 "core_count": 1 
00:19:38.937 } 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 62720 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 62720 ']' 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 62720 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62720 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62720' 00:19:38.937 killing process with pid 62720 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 62720 00:19:38.937 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.937 00:19:38.937 Latency(us) 00:19:38.937 [2024-12-09T10:54:46.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.937 [2024-12-09T10:54:46.823Z] =================================================================================================================== 00:19:38.937 [2024-12-09T10:54:46.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 62720 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.M7Tzoie8F3 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M7Tzoie8F3 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M7Tzoie8F3 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M7Tzoie8F3 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.937 11:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.M7Tzoie8F3 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=64837 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 64837 /var/tmp/bdevperf.sock 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 64837 ']' 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.937 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.938 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.938 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.938 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.938 [2024-12-09 11:54:46.733943] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
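A quick sanity check on the TLSTESTn1 summary above: bdevperf was launched with -o 4096, so the MiB/s column is just IOPS scaled by the 4 KiB I/O size.

# Consistency check against the per-second samples and the final summary
# above (assumes io_size = 4096 bytes, as passed via -o 4096).
for iops in (4326.00, 5633.63):
    print(round(iops * 4096 / 2**20, 2))  # -> 16.9, 22.01 (log: 16.90, 22.01)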
00:19:38.938 [2024-12-09 11:54:46.733997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64837 ] 00:19:38.938 [2024-12-09 11:54:46.792304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.938 [2024-12-09 11:54:46.819962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.198 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.198 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.198 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:39.198 [2024-12-09 11:54:47.058084] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.M7Tzoie8F3': 0100666 00:19:39.198 [2024-12-09 11:54:47.058112] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:39.198 request: 00:19:39.198 { 00:19:39.198 "name": "key0", 00:19:39.198 "path": "/tmp/tmp.M7Tzoie8F3", 00:19:39.198 "method": "keyring_file_add_key", 00:19:39.198 "req_id": 1 00:19:39.198 } 00:19:39.198 Got JSON-RPC error response 00:19:39.198 response: 00:19:39.198 { 00:19:39.198 "code": -1, 00:19:39.198 "message": "Operation not permitted" 00:19:39.198 } 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.458 [2024-12-09 11:54:47.242623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.458 [2024-12-09 11:54:47.242649] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:39.458 request: 00:19:39.458 { 00:19:39.458 "name": "TLSTEST", 00:19:39.458 "trtype": "tcp", 00:19:39.458 "traddr": "10.0.0.2", 00:19:39.458 "adrfam": "ipv4", 00:19:39.458 "trsvcid": "4420", 00:19:39.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.458 "prchk_reftag": false, 00:19:39.458 "prchk_guard": false, 00:19:39.458 "hdgst": false, 00:19:39.458 "ddgst": false, 00:19:39.458 "psk": "key0", 00:19:39.458 "allow_unrecognized_csi": false, 00:19:39.458 "method": "bdev_nvme_attach_controller", 00:19:39.458 "req_id": 1 00:19:39.458 } 00:19:39.458 Got JSON-RPC error response 00:19:39.458 response: 00:19:39.458 { 00:19:39.458 "code": -126, 00:19:39.458 "message": "Required key not available" 00:19:39.458 } 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 64837 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 64837 ']' 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 64837 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64837 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64837' 00:19:39.458 killing process with pid 64837 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 64837 00:19:39.458 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.458 00:19:39.458 Latency(us) 00:19:39.458 [2024-12-09T10:54:47.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.458 [2024-12-09T10:54:47.344Z] =================================================================================================================== 00:19:39.458 [2024-12-09T10:54:47.344Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.458 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 64837 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 62465 ']' 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62465' 00:19:39.719 killing process with pid 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 62465 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.719 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=64910 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 64910 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 64910 ']' 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.720 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.981 [2024-12-09 11:54:47.610409] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:39.981 [2024-12-09 11:54:47.610453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.981 [2024-12-09 11:54:47.663099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.981 [2024-12-09 11:54:47.691503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.981 [2024-12-09 11:54:47.691534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.981 [2024-12-09 11:54:47.691539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.981 [2024-12-09 11:54:47.691544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.981 [2024-12-09 11:54:47.691549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
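Both keyring_file_add_key rejections in this run — the empty path earlier ("Non-absolute paths are not allowed") and the 0100666 mode above ("Invalid permissions for key file") — come out of keyring.c's path validation. A rough Python rendering of that check; the exact mode policy is an assumption inferred from 0600 being accepted and 0666 refused:

import os
import stat

def check_key_path(path: str) -> None:
    # Sketch of the two rejections logged in this run, not the keyring.c source.
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path}")
    mode = os.stat(path).st_mode
    # Assumption: any group/other permission bits disqualify the file.
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '{}': {:07o}".format(path, mode))

check_key_path("/tmp/tmp.M7Tzoie8F3")  # raises while the file is still 0666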
00:19:39.981 [2024-12-09 11:54:47.692010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.M7Tzoie8F3 00:19:39.981 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:40.241 [2024-12-09 11:54:47.962989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.241 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:40.502 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:40.502 [2024-12-09 11:54:48.263703] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.502 [2024-12-09 11:54:48.263892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.502 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:40.763 malloc0 00:19:40.763 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:40.763 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:41.024 [2024-12-09 
11:54:48.726663] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.M7Tzoie8F3': 0100666 00:19:41.024 [2024-12-09 11:54:48.726684] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:41.024 request: 00:19:41.024 { 00:19:41.024 "name": "key0", 00:19:41.024 "path": "/tmp/tmp.M7Tzoie8F3", 00:19:41.024 "method": "keyring_file_add_key", 00:19:41.024 "req_id": 1 00:19:41.024 } 00:19:41.024 Got JSON-RPC error response 00:19:41.024 response: 00:19:41.024 { 00:19:41.024 "code": -1, 00:19:41.024 "message": "Operation not permitted" 00:19:41.024 } 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.024 [2024-12-09 11:54:48.879053] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:41.024 [2024-12-09 11:54:48.879082] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:41.024 request: 00:19:41.024 { 00:19:41.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.024 "host": "nqn.2016-06.io.spdk:host1", 00:19:41.024 "psk": "key0", 00:19:41.024 "method": "nvmf_subsystem_add_host", 00:19:41.024 "req_id": 1 00:19:41.024 } 00:19:41.024 Got JSON-RPC error response 00:19:41.024 response: 00:19:41.024 { 00:19:41.024 "code": -32603, 00:19:41.024 "message": "Internal error" 00:19:41.024 } 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 64910 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 64910 ']' 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 64910 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.024 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64910 00:19:41.284 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:41.284 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:41.284 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64910' 00:19:41.284 killing process with pid 64910 00:19:41.284 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 64910 00:19:41.284 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 64910 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.M7Tzoie8F3 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=65225 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 65225 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 65225 ']' 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.285 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.285 [2024-12-09 11:54:49.103379] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:41.285 [2024-12-09 11:54:49.103424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.285 [2024-12-09 11:54:49.156623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.546 [2024-12-09 11:54:49.184748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.546 [2024-12-09 11:54:49.184775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.546 [2024-12-09 11:54:49.184780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.546 [2024-12-09 11:54:49.184785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.546 [2024-12-09 11:54:49.184789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:41.546 [2024-12-09 11:54:49.185216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.M7Tzoie8F3 00:19:41.546 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.806 [2024-12-09 11:54:49.435798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.806 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.806 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.067 [2024-12-09 11:54:49.772617] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.067 [2024-12-09 11:54:49.772817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.067 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.067 malloc0 00:19:42.067 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.328 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=65581 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 65581 /var/tmp/bdevperf.sock 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 65581 ']' 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.590 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.590 [2024-12-09 11:54:50.431971] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:42.590 [2024-12-09 11:54:50.432013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65581 ] 00:19:42.855 [2024-12-09 11:54:50.481044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.855 [2024-12-09 11:54:50.510077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.855 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.855 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.855 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:43.115 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.115 [2024-12-09 11:54:50.888386] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.115 TLSTESTn1 00:19:43.115 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:43.375 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:43.375 "subsystems": [ 00:19:43.375 { 00:19:43.375 "subsystem": "keyring", 00:19:43.375 "config": [ 00:19:43.375 { 00:19:43.375 "method": "keyring_file_add_key", 00:19:43.375 "params": { 00:19:43.375 "name": "key0", 00:19:43.375 "path": "/tmp/tmp.M7Tzoie8F3" 00:19:43.375 } 00:19:43.375 } 00:19:43.375 ] 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "subsystem": "iobuf", 00:19:43.375 "config": [ 00:19:43.375 { 00:19:43.375 "method": "iobuf_set_options", 00:19:43.375 "params": { 00:19:43.375 "small_pool_count": 8192, 00:19:43.375 "large_pool_count": 1024, 00:19:43.375 "small_bufsize": 8192, 00:19:43.375 "large_bufsize": 135168, 00:19:43.375 "enable_numa": false 00:19:43.375 } 00:19:43.375 } 00:19:43.375 ] 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "subsystem": "sock", 00:19:43.375 "config": [ 00:19:43.375 { 00:19:43.375 "method": "sock_set_default_impl", 00:19:43.375 "params": { 00:19:43.375 "impl_name": "posix" 
00:19:43.375 } 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "method": "sock_impl_set_options", 00:19:43.375 "params": { 00:19:43.375 "impl_name": "ssl", 00:19:43.375 "recv_buf_size": 4096, 00:19:43.375 "send_buf_size": 4096, 00:19:43.375 "enable_recv_pipe": true, 00:19:43.375 "enable_quickack": false, 00:19:43.375 "enable_placement_id": 0, 00:19:43.375 "enable_zerocopy_send_server": true, 00:19:43.375 "enable_zerocopy_send_client": false, 00:19:43.375 "zerocopy_threshold": 0, 00:19:43.375 "tls_version": 0, 00:19:43.375 "enable_ktls": false 00:19:43.375 } 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "method": "sock_impl_set_options", 00:19:43.375 "params": { 00:19:43.375 "impl_name": "posix", 00:19:43.375 "recv_buf_size": 2097152, 00:19:43.375 "send_buf_size": 2097152, 00:19:43.375 "enable_recv_pipe": true, 00:19:43.375 "enable_quickack": false, 00:19:43.375 "enable_placement_id": 0, 00:19:43.375 "enable_zerocopy_send_server": true, 00:19:43.375 "enable_zerocopy_send_client": false, 00:19:43.375 "zerocopy_threshold": 0, 00:19:43.375 "tls_version": 0, 00:19:43.375 "enable_ktls": false 00:19:43.375 } 00:19:43.375 } 00:19:43.375 ] 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "subsystem": "vmd", 00:19:43.375 "config": [] 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "subsystem": "accel", 00:19:43.375 "config": [ 00:19:43.375 { 00:19:43.375 "method": "accel_set_options", 00:19:43.375 "params": { 00:19:43.375 "small_cache_size": 128, 00:19:43.375 "large_cache_size": 16, 00:19:43.375 "task_count": 2048, 00:19:43.375 "sequence_count": 2048, 00:19:43.375 "buf_count": 2048 00:19:43.375 } 00:19:43.375 } 00:19:43.375 ] 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "subsystem": "bdev", 00:19:43.375 "config": [ 00:19:43.375 { 00:19:43.375 "method": "bdev_set_options", 00:19:43.375 "params": { 00:19:43.375 "bdev_io_pool_size": 65535, 00:19:43.375 "bdev_io_cache_size": 256, 00:19:43.375 "bdev_auto_examine": true, 00:19:43.375 "iobuf_small_cache_size": 128, 00:19:43.375 "iobuf_large_cache_size": 16 00:19:43.375 } 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "method": "bdev_raid_set_options", 00:19:43.375 "params": { 00:19:43.375 "process_window_size_kb": 1024, 00:19:43.375 "process_max_bandwidth_mb_sec": 0 00:19:43.375 } 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "method": "bdev_iscsi_set_options", 00:19:43.375 "params": { 00:19:43.375 "timeout_sec": 30 00:19:43.375 } 00:19:43.375 }, 00:19:43.375 { 00:19:43.375 "method": "bdev_nvme_set_options", 00:19:43.375 "params": { 00:19:43.375 "action_on_timeout": "none", 00:19:43.376 "timeout_us": 0, 00:19:43.376 "timeout_admin_us": 0, 00:19:43.376 "keep_alive_timeout_ms": 10000, 00:19:43.376 "arbitration_burst": 0, 00:19:43.376 "low_priority_weight": 0, 00:19:43.376 "medium_priority_weight": 0, 00:19:43.376 "high_priority_weight": 0, 00:19:43.376 "nvme_adminq_poll_period_us": 10000, 00:19:43.376 "nvme_ioq_poll_period_us": 0, 00:19:43.376 "io_queue_requests": 0, 00:19:43.376 "delay_cmd_submit": true, 00:19:43.376 "transport_retry_count": 4, 00:19:43.376 "bdev_retry_count": 3, 00:19:43.376 "transport_ack_timeout": 0, 00:19:43.376 "ctrlr_loss_timeout_sec": 0, 00:19:43.376 "reconnect_delay_sec": 0, 00:19:43.376 "fast_io_fail_timeout_sec": 0, 00:19:43.376 "disable_auto_failback": false, 00:19:43.376 "generate_uuids": false, 00:19:43.376 "transport_tos": 0, 00:19:43.376 "nvme_error_stat": false, 00:19:43.376 "rdma_srq_size": 0, 00:19:43.376 "io_path_stat": false, 00:19:43.376 "allow_accel_sequence": false, 00:19:43.376 "rdma_max_cq_size": 0, 00:19:43.376 
"rdma_cm_event_timeout_ms": 0, 00:19:43.376 "dhchap_digests": [ 00:19:43.376 "sha256", 00:19:43.376 "sha384", 00:19:43.376 "sha512" 00:19:43.376 ], 00:19:43.376 "dhchap_dhgroups": [ 00:19:43.376 "null", 00:19:43.376 "ffdhe2048", 00:19:43.376 "ffdhe3072", 00:19:43.376 "ffdhe4096", 00:19:43.376 "ffdhe6144", 00:19:43.376 "ffdhe8192" 00:19:43.376 ] 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "bdev_nvme_set_hotplug", 00:19:43.376 "params": { 00:19:43.376 "period_us": 100000, 00:19:43.376 "enable": false 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "bdev_malloc_create", 00:19:43.376 "params": { 00:19:43.376 "name": "malloc0", 00:19:43.376 "num_blocks": 8192, 00:19:43.376 "block_size": 4096, 00:19:43.376 "physical_block_size": 4096, 00:19:43.376 "uuid": "2cc1f833-bb9e-4c94-83fa-896c9b71a98d", 00:19:43.376 "optimal_io_boundary": 0, 00:19:43.376 "md_size": 0, 00:19:43.376 "dif_type": 0, 00:19:43.376 "dif_is_head_of_md": false, 00:19:43.376 "dif_pi_format": 0 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "bdev_wait_for_examine" 00:19:43.376 } 00:19:43.376 ] 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "subsystem": "nbd", 00:19:43.376 "config": [] 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "subsystem": "scheduler", 00:19:43.376 "config": [ 00:19:43.376 { 00:19:43.376 "method": "framework_set_scheduler", 00:19:43.376 "params": { 00:19:43.376 "name": "static" 00:19:43.376 } 00:19:43.376 } 00:19:43.376 ] 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "subsystem": "nvmf", 00:19:43.376 "config": [ 00:19:43.376 { 00:19:43.376 "method": "nvmf_set_config", 00:19:43.376 "params": { 00:19:43.376 "discovery_filter": "match_any", 00:19:43.376 "admin_cmd_passthru": { 00:19:43.376 "identify_ctrlr": false 00:19:43.376 }, 00:19:43.376 "dhchap_digests": [ 00:19:43.376 "sha256", 00:19:43.376 "sha384", 00:19:43.376 "sha512" 00:19:43.376 ], 00:19:43.376 "dhchap_dhgroups": [ 00:19:43.376 "null", 00:19:43.376 "ffdhe2048", 00:19:43.376 "ffdhe3072", 00:19:43.376 "ffdhe4096", 00:19:43.376 "ffdhe6144", 00:19:43.376 "ffdhe8192" 00:19:43.376 ] 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_set_max_subsystems", 00:19:43.376 "params": { 00:19:43.376 "max_subsystems": 1024 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_set_crdt", 00:19:43.376 "params": { 00:19:43.376 "crdt1": 0, 00:19:43.376 "crdt2": 0, 00:19:43.376 "crdt3": 0 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_create_transport", 00:19:43.376 "params": { 00:19:43.376 "trtype": "TCP", 00:19:43.376 "max_queue_depth": 128, 00:19:43.376 "max_io_qpairs_per_ctrlr": 127, 00:19:43.376 "in_capsule_data_size": 4096, 00:19:43.376 "max_io_size": 131072, 00:19:43.376 "io_unit_size": 131072, 00:19:43.376 "max_aq_depth": 128, 00:19:43.376 "num_shared_buffers": 511, 00:19:43.376 "buf_cache_size": 4294967295, 00:19:43.376 "dif_insert_or_strip": false, 00:19:43.376 "zcopy": false, 00:19:43.376 "c2h_success": false, 00:19:43.376 "sock_priority": 0, 00:19:43.376 "abort_timeout_sec": 1, 00:19:43.376 "ack_timeout": 0, 00:19:43.376 "data_wr_pool_size": 0 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_create_subsystem", 00:19:43.376 "params": { 00:19:43.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.376 "allow_any_host": false, 00:19:43.376 "serial_number": "SPDK00000000000001", 00:19:43.376 "model_number": "SPDK bdev Controller", 00:19:43.376 "max_namespaces": 10, 00:19:43.376 "min_cntlid": 1, 00:19:43.376 
"max_cntlid": 65519, 00:19:43.376 "ana_reporting": false 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_subsystem_add_host", 00:19:43.376 "params": { 00:19:43.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.376 "host": "nqn.2016-06.io.spdk:host1", 00:19:43.376 "psk": "key0" 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_subsystem_add_ns", 00:19:43.376 "params": { 00:19:43.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.376 "namespace": { 00:19:43.376 "nsid": 1, 00:19:43.376 "bdev_name": "malloc0", 00:19:43.376 "nguid": "2CC1F833BB9E4C9483FA896C9B71A98D", 00:19:43.376 "uuid": "2cc1f833-bb9e-4c94-83fa-896c9b71a98d", 00:19:43.376 "no_auto_visible": false 00:19:43.376 } 00:19:43.376 } 00:19:43.376 }, 00:19:43.376 { 00:19:43.376 "method": "nvmf_subsystem_add_listener", 00:19:43.376 "params": { 00:19:43.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.376 "listen_address": { 00:19:43.376 "trtype": "TCP", 00:19:43.376 "adrfam": "IPv4", 00:19:43.376 "traddr": "10.0.0.2", 00:19:43.376 "trsvcid": "4420" 00:19:43.376 }, 00:19:43.376 "secure_channel": true 00:19:43.376 } 00:19:43.376 } 00:19:43.376 ] 00:19:43.376 } 00:19:43.376 ] 00:19:43.376 }' 00:19:43.376 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:43.636 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:43.637 "subsystems": [ 00:19:43.637 { 00:19:43.637 "subsystem": "keyring", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "keyring_file_add_key", 00:19:43.637 "params": { 00:19:43.637 "name": "key0", 00:19:43.637 "path": "/tmp/tmp.M7Tzoie8F3" 00:19:43.637 } 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "iobuf", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "iobuf_set_options", 00:19:43.637 "params": { 00:19:43.637 "small_pool_count": 8192, 00:19:43.637 "large_pool_count": 1024, 00:19:43.637 "small_bufsize": 8192, 00:19:43.637 "large_bufsize": 135168, 00:19:43.637 "enable_numa": false 00:19:43.637 } 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "sock", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "sock_set_default_impl", 00:19:43.637 "params": { 00:19:43.637 "impl_name": "posix" 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "sock_impl_set_options", 00:19:43.637 "params": { 00:19:43.637 "impl_name": "ssl", 00:19:43.637 "recv_buf_size": 4096, 00:19:43.637 "send_buf_size": 4096, 00:19:43.637 "enable_recv_pipe": true, 00:19:43.637 "enable_quickack": false, 00:19:43.637 "enable_placement_id": 0, 00:19:43.637 "enable_zerocopy_send_server": true, 00:19:43.637 "enable_zerocopy_send_client": false, 00:19:43.637 "zerocopy_threshold": 0, 00:19:43.637 "tls_version": 0, 00:19:43.637 "enable_ktls": false 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "sock_impl_set_options", 00:19:43.637 "params": { 00:19:43.637 "impl_name": "posix", 00:19:43.637 "recv_buf_size": 2097152, 00:19:43.637 "send_buf_size": 2097152, 00:19:43.637 "enable_recv_pipe": true, 00:19:43.637 "enable_quickack": false, 00:19:43.637 "enable_placement_id": 0, 00:19:43.637 "enable_zerocopy_send_server": true, 00:19:43.637 "enable_zerocopy_send_client": false, 00:19:43.637 "zerocopy_threshold": 0, 00:19:43.637 "tls_version": 0, 00:19:43.637 "enable_ktls": false 00:19:43.637 } 00:19:43.637 
} 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "vmd", 00:19:43.637 "config": [] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "accel", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "accel_set_options", 00:19:43.637 "params": { 00:19:43.637 "small_cache_size": 128, 00:19:43.637 "large_cache_size": 16, 00:19:43.637 "task_count": 2048, 00:19:43.637 "sequence_count": 2048, 00:19:43.637 "buf_count": 2048 00:19:43.637 } 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "bdev", 00:19:43.637 "config": [ 00:19:43.637 { 00:19:43.637 "method": "bdev_set_options", 00:19:43.637 "params": { 00:19:43.637 "bdev_io_pool_size": 65535, 00:19:43.637 "bdev_io_cache_size": 256, 00:19:43.637 "bdev_auto_examine": true, 00:19:43.637 "iobuf_small_cache_size": 128, 00:19:43.637 "iobuf_large_cache_size": 16 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "bdev_raid_set_options", 00:19:43.637 "params": { 00:19:43.637 "process_window_size_kb": 1024, 00:19:43.637 "process_max_bandwidth_mb_sec": 0 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "bdev_iscsi_set_options", 00:19:43.637 "params": { 00:19:43.637 "timeout_sec": 30 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "bdev_nvme_set_options", 00:19:43.637 "params": { 00:19:43.637 "action_on_timeout": "none", 00:19:43.637 "timeout_us": 0, 00:19:43.637 "timeout_admin_us": 0, 00:19:43.637 "keep_alive_timeout_ms": 10000, 00:19:43.637 "arbitration_burst": 0, 00:19:43.637 "low_priority_weight": 0, 00:19:43.637 "medium_priority_weight": 0, 00:19:43.637 "high_priority_weight": 0, 00:19:43.637 "nvme_adminq_poll_period_us": 10000, 00:19:43.637 "nvme_ioq_poll_period_us": 0, 00:19:43.637 "io_queue_requests": 512, 00:19:43.637 "delay_cmd_submit": true, 00:19:43.637 "transport_retry_count": 4, 00:19:43.637 "bdev_retry_count": 3, 00:19:43.637 "transport_ack_timeout": 0, 00:19:43.637 "ctrlr_loss_timeout_sec": 0, 00:19:43.637 "reconnect_delay_sec": 0, 00:19:43.637 "fast_io_fail_timeout_sec": 0, 00:19:43.637 "disable_auto_failback": false, 00:19:43.637 "generate_uuids": false, 00:19:43.637 "transport_tos": 0, 00:19:43.637 "nvme_error_stat": false, 00:19:43.637 "rdma_srq_size": 0, 00:19:43.637 "io_path_stat": false, 00:19:43.637 "allow_accel_sequence": false, 00:19:43.637 "rdma_max_cq_size": 0, 00:19:43.637 "rdma_cm_event_timeout_ms": 0, 00:19:43.637 "dhchap_digests": [ 00:19:43.637 "sha256", 00:19:43.637 "sha384", 00:19:43.637 "sha512" 00:19:43.637 ], 00:19:43.637 "dhchap_dhgroups": [ 00:19:43.637 "null", 00:19:43.637 "ffdhe2048", 00:19:43.637 "ffdhe3072", 00:19:43.637 "ffdhe4096", 00:19:43.637 "ffdhe6144", 00:19:43.637 "ffdhe8192" 00:19:43.637 ] 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "bdev_nvme_attach_controller", 00:19:43.637 "params": { 00:19:43.637 "name": "TLSTEST", 00:19:43.637 "trtype": "TCP", 00:19:43.637 "adrfam": "IPv4", 00:19:43.637 "traddr": "10.0.0.2", 00:19:43.637 "trsvcid": "4420", 00:19:43.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.637 "prchk_reftag": false, 00:19:43.637 "prchk_guard": false, 00:19:43.637 "ctrlr_loss_timeout_sec": 0, 00:19:43.637 "reconnect_delay_sec": 0, 00:19:43.637 "fast_io_fail_timeout_sec": 0, 00:19:43.637 "psk": "key0", 00:19:43.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.637 "hdgst": false, 00:19:43.637 "ddgst": false, 00:19:43.637 "multipath": "multipath" 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": 
"bdev_nvme_set_hotplug", 00:19:43.637 "params": { 00:19:43.637 "period_us": 100000, 00:19:43.637 "enable": false 00:19:43.637 } 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "method": "bdev_wait_for_examine" 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }, 00:19:43.637 { 00:19:43.637 "subsystem": "nbd", 00:19:43.637 "config": [] 00:19:43.637 } 00:19:43.637 ] 00:19:43.637 }' 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 65581 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 65581 ']' 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 65581 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.637 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65581 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65581' 00:19:43.897 killing process with pid 65581 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 65581 00:19:43.897 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.897 00:19:43.897 Latency(us) 00:19:43.897 [2024-12-09T10:54:51.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.897 [2024-12-09T10:54:51.783Z] =================================================================================================================== 00:19:43.897 [2024-12-09T10:54:51.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 65581 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 65225 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 65225 ']' 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 65225 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65225 00:19:43.897 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.898 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.898 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65225' 00:19:43.898 killing process with pid 65225 00:19:43.898 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 65225 00:19:43.898 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 65225 00:19:44.158 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # 
nvmfappstart -m 0x2 -c /dev/fd/62 00:19:44.158 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:44.158 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.158 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.158 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:44.158 "subsystems": [ 00:19:44.158 { 00:19:44.158 "subsystem": "keyring", 00:19:44.158 "config": [ 00:19:44.158 { 00:19:44.158 "method": "keyring_file_add_key", 00:19:44.158 "params": { 00:19:44.158 "name": "key0", 00:19:44.158 "path": "/tmp/tmp.M7Tzoie8F3" 00:19:44.158 } 00:19:44.158 } 00:19:44.158 ] 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "subsystem": "iobuf", 00:19:44.158 "config": [ 00:19:44.158 { 00:19:44.158 "method": "iobuf_set_options", 00:19:44.158 "params": { 00:19:44.158 "small_pool_count": 8192, 00:19:44.158 "large_pool_count": 1024, 00:19:44.158 "small_bufsize": 8192, 00:19:44.158 "large_bufsize": 135168, 00:19:44.158 "enable_numa": false 00:19:44.158 } 00:19:44.158 } 00:19:44.158 ] 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "subsystem": "sock", 00:19:44.158 "config": [ 00:19:44.158 { 00:19:44.158 "method": "sock_set_default_impl", 00:19:44.158 "params": { 00:19:44.158 "impl_name": "posix" 00:19:44.158 } 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "method": "sock_impl_set_options", 00:19:44.158 "params": { 00:19:44.158 "impl_name": "ssl", 00:19:44.158 "recv_buf_size": 4096, 00:19:44.158 "send_buf_size": 4096, 00:19:44.158 "enable_recv_pipe": true, 00:19:44.158 "enable_quickack": false, 00:19:44.158 "enable_placement_id": 0, 00:19:44.158 "enable_zerocopy_send_server": true, 00:19:44.158 "enable_zerocopy_send_client": false, 00:19:44.158 "zerocopy_threshold": 0, 00:19:44.158 "tls_version": 0, 00:19:44.158 "enable_ktls": false 00:19:44.158 } 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "method": "sock_impl_set_options", 00:19:44.158 "params": { 00:19:44.158 "impl_name": "posix", 00:19:44.158 "recv_buf_size": 2097152, 00:19:44.158 "send_buf_size": 2097152, 00:19:44.158 "enable_recv_pipe": true, 00:19:44.158 "enable_quickack": false, 00:19:44.158 "enable_placement_id": 0, 00:19:44.158 "enable_zerocopy_send_server": true, 00:19:44.158 "enable_zerocopy_send_client": false, 00:19:44.158 "zerocopy_threshold": 0, 00:19:44.158 "tls_version": 0, 00:19:44.158 "enable_ktls": false 00:19:44.158 } 00:19:44.158 } 00:19:44.158 ] 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "subsystem": "vmd", 00:19:44.158 "config": [] 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "subsystem": "accel", 00:19:44.158 "config": [ 00:19:44.158 { 00:19:44.158 "method": "accel_set_options", 00:19:44.158 "params": { 00:19:44.158 "small_cache_size": 128, 00:19:44.158 "large_cache_size": 16, 00:19:44.158 "task_count": 2048, 00:19:44.158 "sequence_count": 2048, 00:19:44.158 "buf_count": 2048 00:19:44.158 } 00:19:44.158 } 00:19:44.158 ] 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "subsystem": "bdev", 00:19:44.158 "config": [ 00:19:44.158 { 00:19:44.158 "method": "bdev_set_options", 00:19:44.158 "params": { 00:19:44.158 "bdev_io_pool_size": 65535, 00:19:44.158 "bdev_io_cache_size": 256, 00:19:44.158 "bdev_auto_examine": true, 00:19:44.158 "iobuf_small_cache_size": 128, 00:19:44.158 "iobuf_large_cache_size": 16 00:19:44.158 } 00:19:44.158 }, 00:19:44.158 { 00:19:44.158 "method": "bdev_raid_set_options", 00:19:44.158 "params": { 00:19:44.158 "process_window_size_kb": 1024, 
00:19:44.158 "process_max_bandwidth_mb_sec": 0 00:19:44.158 } 00:19:44.158 }, 00:19:44.159 { 00:19:44.159 "method": "bdev_iscsi_set_options", 00:19:44.159 "params": { 00:19:44.159 "timeout_sec": 30 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "bdev_nvme_set_options", 00:19:44.159 "params": { 00:19:44.159 "action_on_timeout": "none", 00:19:44.159 "timeout_us": 0, 00:19:44.159 "timeout_admin_us": 0, 00:19:44.159 "keep_alive_timeout_ms": 10000, 00:19:44.159 "arbitration_burst": 0, 00:19:44.159 "low_priority_weight": 0, 00:19:44.159 "medium_priority_weight": 0, 00:19:44.159 "high_priority_weight": 0, 00:19:44.159 "nvme_adminq_poll_period_us": 10000, 00:19:44.159 "nvme_ioq_poll_period_us": 0, 00:19:44.159 "io_queue_requests": 0, 00:19:44.159 "delay_cmd_submit": true, 00:19:44.159 "transport_retry_count": 4, 00:19:44.159 "bdev_retry_count": 3, 00:19:44.159 "transport_ack_timeout": 0, 00:19:44.159 "ctrlr_loss_timeout_sec": 0, 00:19:44.159 "reconnect_delay_sec": 0, 00:19:44.159 "fast_io_fail_timeout_sec": 0, 00:19:44.159 "disable_auto_failback": false, 00:19:44.159 "generate_uuids": false, 00:19:44.159 "transport_tos": 0, 00:19:44.159 "nvme_error_stat": false, 00:19:44.159 "rdma_srq_size": 0, 00:19:44.159 "io_path_stat": false, 00:19:44.159 "allow_accel_sequence": false, 00:19:44.159 "rdma_max_cq_size": 0, 00:19:44.159 "rdma_cm_event_timeout_ms": 0, 00:19:44.159 "dhchap_digests": [ 00:19:44.159 "sha256", 00:19:44.159 "sha384", 00:19:44.159 "sha512" 00:19:44.159 ], 00:19:44.159 "dhchap_dhgroups": [ 00:19:44.159 "null", 00:19:44.159 "ffdhe2048", 00:19:44.159 "ffdhe3072", 00:19:44.159 "ffdhe4096", 00:19:44.159 "ffdhe6144", 00:19:44.159 "ffdhe8192" 00:19:44.159 ] 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "bdev_nvme_set_hotplug", 00:19:44.159 "params": { 00:19:44.159 "period_us": 100000, 00:19:44.159 "enable": false 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "bdev_malloc_create", 00:19:44.159 "params": { 00:19:44.159 "name": "malloc0", 00:19:44.159 "num_blocks": 8192, 00:19:44.159 "block_size": 4096, 00:19:44.159 "physical_block_size": 4096, 00:19:44.159 "uuid": "2cc1f833-bb9e-4c94-83fa-896c9b71a98d", 00:19:44.159 "optimal_io_boundary": 0, 00:19:44.159 "md_size": 0, 00:19:44.159 "dif_type": 0, 00:19:44.159 "dif_is_head_of_md": false, 00:19:44.159 "dif_pi_format": 0 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "bdev_wait_for_examine" 00:19:44.159 } 00:19:44.159 ] 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "subsystem": "nbd", 00:19:44.159 "config": [] 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "subsystem": "scheduler", 00:19:44.159 "config": [ 00:19:44.159 { 00:19:44.159 "method": "framework_set_scheduler", 00:19:44.159 "params": { 00:19:44.159 "name": "static" 00:19:44.159 } 00:19:44.159 } 00:19:44.159 ] 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "subsystem": "nvmf", 00:19:44.159 "config": [ 00:19:44.159 { 00:19:44.159 "method": "nvmf_set_config", 00:19:44.159 "params": { 00:19:44.159 "discovery_filter": "match_any", 00:19:44.159 "admin_cmd_passthru": { 00:19:44.159 "identify_ctrlr": false 00:19:44.159 }, 00:19:44.159 "dhchap_digests": [ 00:19:44.159 "sha256", 00:19:44.159 "sha384", 00:19:44.159 "sha512" 00:19:44.159 ], 00:19:44.159 "dhchap_dhgroups": [ 00:19:44.159 "null", 00:19:44.159 "ffdhe2048", 00:19:44.159 "ffdhe3072", 00:19:44.159 "ffdhe4096", 00:19:44.159 "ffdhe6144", 00:19:44.159 "ffdhe8192" 00:19:44.159 ] 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": 
"nvmf_set_max_subsystems", 00:19:44.159 "params": { 00:19:44.159 "max_subsystems": 1024 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_set_crdt", 00:19:44.159 "params": { 00:19:44.159 "crdt1": 0, 00:19:44.159 "crdt2": 0, 00:19:44.159 "crdt3": 0 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_create_transport", 00:19:44.159 "params": { 00:19:44.159 "trtype": "TCP", 00:19:44.159 "max_queue_depth": 128, 00:19:44.159 "max_io_qpairs_per_ctrlr": 127, 00:19:44.159 "in_capsule_data_size": 4096, 00:19:44.159 "max_io_size": 131072, 00:19:44.159 "io_unit_size": 131072, 00:19:44.159 "max_aq_depth": 128, 00:19:44.159 "num_shared_buffers": 511, 00:19:44.159 "buf_cache_size": 4294967295, 00:19:44.159 "dif_insert_or_strip": false, 00:19:44.159 "zcopy": false, 00:19:44.159 "c2h_success": false, 00:19:44.159 "sock_priority": 0, 00:19:44.159 "abort_timeout_sec": 1, 00:19:44.159 "ack_timeout": 0, 00:19:44.159 "data_wr_pool_size": 0 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_create_subsystem", 00:19:44.159 "params": { 00:19:44.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.159 "allow_any_host": false, 00:19:44.159 "serial_number": "SPDK00000000000001", 00:19:44.159 "model_number": "SPDK bdev Controller", 00:19:44.159 "max_namespaces": 10, 00:19:44.159 "min_cntlid": 1, 00:19:44.159 "max_cntlid": 65519, 00:19:44.159 "ana_reporting": false 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_subsystem_add_host", 00:19:44.159 "params": { 00:19:44.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.159 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.159 "psk": "key0" 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_subsystem_add_ns", 00:19:44.159 "params": { 00:19:44.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.159 "namespace": { 00:19:44.159 "nsid": 1, 00:19:44.159 "bdev_name": "malloc0", 00:19:44.159 "nguid": "2CC1F833BB9E4C9483FA896C9B71A98D", 00:19:44.159 "uuid": "2cc1f833-bb9e-4c94-83fa-896c9b71a98d", 00:19:44.159 "no_auto_visible": false 00:19:44.159 } 00:19:44.159 } 00:19:44.159 }, 00:19:44.159 { 00:19:44.159 "method": "nvmf_subsystem_add_listener", 00:19:44.159 "params": { 00:19:44.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.159 "listen_address": { 00:19:44.159 "trtype": "TCP", 00:19:44.159 "adrfam": "IPv4", 00:19:44.159 "traddr": "10.0.0.2", 00:19:44.159 "trsvcid": "4420" 00:19:44.159 }, 00:19:44.159 "secure_channel": true 00:19:44.159 } 00:19:44.159 } 00:19:44.159 ] 00:19:44.159 } 00:19:44.159 ] 00:19:44.159 }' 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=65935 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 65935 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 65935 ']' 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:44.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.159 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.159 [2024-12-09 11:54:51.898883] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:44.159 [2024-12-09 11:54:51.898944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.159 [2024-12-09 11:54:51.987936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.159 [2024-12-09 11:54:52.017498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.159 [2024-12-09 11:54:52.017529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.159 [2024-12-09 11:54:52.017534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.159 [2024-12-09 11:54:52.017542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.159 [2024-12-09 11:54:52.017546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.159 [2024-12-09 11:54:52.018025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.419 [2024-12-09 11:54:52.211397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.419 [2024-12-09 11:54:52.243421] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.419 [2024-12-09 11:54:52.243604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=65996 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 65996 /var/tmp/bdevperf.sock 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 65996 ']' 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
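Both restarts in this pass feed their JSON configuration over a pipe instead of a file: the target above got -c /dev/fd/62, and the bdevperf launch that follows gets -c /dev/fd/63. That is bash process substitution; a minimal sketch of the pattern, with $tgtconf and $bdevperfconf standing for the JSON blobs echoed in the log (paths shortened):

    # the child opens /dev/fd/62 (or /dev/fd/63) and reads the echoed JSON from it
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &

For TLS, the load-bearing entries in those configs are keyring_file_add_key (which loads /tmp/tmp.M7Tzoie8F3 as "key0") and, on the bdevperf side, bdev_nvme_attach_controller with "psk": "key0", so the secure connection is established during startup configuration rather than by later RPCs.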
00:19:44.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.990 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:44.990 "subsystems": [ 00:19:44.990 { 00:19:44.990 "subsystem": "keyring", 00:19:44.990 "config": [ 00:19:44.990 { 00:19:44.990 "method": "keyring_file_add_key", 00:19:44.990 "params": { 00:19:44.990 "name": "key0", 00:19:44.990 "path": "/tmp/tmp.M7Tzoie8F3" 00:19:44.990 } 00:19:44.990 } 00:19:44.990 ] 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "subsystem": "iobuf", 00:19:44.990 "config": [ 00:19:44.990 { 00:19:44.990 "method": "iobuf_set_options", 00:19:44.990 "params": { 00:19:44.990 "small_pool_count": 8192, 00:19:44.990 "large_pool_count": 1024, 00:19:44.990 "small_bufsize": 8192, 00:19:44.990 "large_bufsize": 135168, 00:19:44.990 "enable_numa": false 00:19:44.990 } 00:19:44.990 } 00:19:44.990 ] 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "subsystem": "sock", 00:19:44.990 "config": [ 00:19:44.990 { 00:19:44.990 "method": "sock_set_default_impl", 00:19:44.990 "params": { 00:19:44.990 "impl_name": "posix" 00:19:44.990 } 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "method": "sock_impl_set_options", 00:19:44.990 "params": { 00:19:44.990 "impl_name": "ssl", 00:19:44.990 "recv_buf_size": 4096, 00:19:44.990 "send_buf_size": 4096, 00:19:44.990 "enable_recv_pipe": true, 00:19:44.990 "enable_quickack": false, 00:19:44.990 "enable_placement_id": 0, 00:19:44.990 "enable_zerocopy_send_server": true, 00:19:44.990 "enable_zerocopy_send_client": false, 00:19:44.990 "zerocopy_threshold": 0, 00:19:44.990 "tls_version": 0, 00:19:44.990 "enable_ktls": false 00:19:44.990 } 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "method": "sock_impl_set_options", 00:19:44.990 "params": { 00:19:44.990 "impl_name": "posix", 00:19:44.990 "recv_buf_size": 2097152, 00:19:44.990 "send_buf_size": 2097152, 00:19:44.990 "enable_recv_pipe": true, 00:19:44.990 "enable_quickack": false, 00:19:44.990 "enable_placement_id": 0, 00:19:44.990 "enable_zerocopy_send_server": true, 00:19:44.990 "enable_zerocopy_send_client": false, 00:19:44.990 "zerocopy_threshold": 0, 00:19:44.990 "tls_version": 0, 00:19:44.990 "enable_ktls": false 00:19:44.990 } 00:19:44.990 } 00:19:44.990 ] 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "subsystem": "vmd", 00:19:44.990 "config": [] 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "subsystem": "accel", 00:19:44.990 "config": [ 00:19:44.990 { 00:19:44.990 "method": "accel_set_options", 00:19:44.990 "params": { 00:19:44.990 "small_cache_size": 128, 00:19:44.990 "large_cache_size": 16, 00:19:44.990 "task_count": 2048, 00:19:44.990 "sequence_count": 2048, 00:19:44.990 "buf_count": 2048 00:19:44.990 } 00:19:44.990 } 00:19:44.990 ] 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "subsystem": "bdev", 00:19:44.990 "config": [ 00:19:44.990 { 00:19:44.990 "method": "bdev_set_options", 00:19:44.990 "params": { 00:19:44.990 "bdev_io_pool_size": 65535, 00:19:44.990 "bdev_io_cache_size": 256, 00:19:44.990 "bdev_auto_examine": true, 00:19:44.990 "iobuf_small_cache_size": 128, 
00:19:44.990 "iobuf_large_cache_size": 16 00:19:44.990 } 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "method": "bdev_raid_set_options", 00:19:44.990 "params": { 00:19:44.990 "process_window_size_kb": 1024, 00:19:44.990 "process_max_bandwidth_mb_sec": 0 00:19:44.990 } 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "method": "bdev_iscsi_set_options", 00:19:44.990 "params": { 00:19:44.990 "timeout_sec": 30 00:19:44.990 } 00:19:44.990 }, 00:19:44.990 { 00:19:44.990 "method": "bdev_nvme_set_options", 00:19:44.990 "params": { 00:19:44.990 "action_on_timeout": "none", 00:19:44.990 "timeout_us": 0, 00:19:44.990 "timeout_admin_us": 0, 00:19:44.990 "keep_alive_timeout_ms": 10000, 00:19:44.990 "arbitration_burst": 0, 00:19:44.990 "low_priority_weight": 0, 00:19:44.990 "medium_priority_weight": 0, 00:19:44.990 "high_priority_weight": 0, 00:19:44.990 "nvme_adminq_poll_period_us": 10000, 00:19:44.990 "nvme_ioq_poll_period_us": 0, 00:19:44.990 "io_queue_requests": 512, 00:19:44.990 "delay_cmd_submit": true, 00:19:44.990 "transport_retry_count": 4, 00:19:44.990 "bdev_retry_count": 3, 00:19:44.990 "transport_ack_timeout": 0, 00:19:44.990 "ctrlr_loss_timeout_sec": 0, 00:19:44.990 "reconnect_delay_sec": 0, 00:19:44.990 "fast_io_fail_timeout_sec": 0, 00:19:44.990 "disable_auto_failback": false, 00:19:44.990 "generate_uuids": false, 00:19:44.990 "transport_tos": 0, 00:19:44.990 "nvme_error_stat": false, 00:19:44.990 "rdma_srq_size": 0, 00:19:44.990 "io_path_stat": false, 00:19:44.990 "allow_accel_sequence": false, 00:19:44.990 "rdma_max_cq_size": 0, 00:19:44.990 "rdma_cm_event_timeout_ms": 0, 00:19:44.990 "dhchap_digests": [ 00:19:44.990 "sha256", 00:19:44.990 "sha384", 00:19:44.990 "sha512" 00:19:44.990 ], 00:19:44.991 "dhchap_dhgroups": [ 00:19:44.991 "null", 00:19:44.991 "ffdhe2048", 00:19:44.991 "ffdhe3072", 00:19:44.991 "ffdhe4096", 00:19:44.991 "ffdhe6144", 00:19:44.991 "ffdhe8192" 00:19:44.991 ] 00:19:44.991 } 00:19:44.991 }, 00:19:44.991 { 00:19:44.991 "method": "bdev_nvme_attach_controller", 00:19:44.991 "params": { 00:19:44.991 "name": "TLSTEST", 00:19:44.991 "trtype": "TCP", 00:19:44.991 "adrfam": "IPv4", 00:19:44.991 "traddr": "10.0.0.2", 00:19:44.991 "trsvcid": "4420", 00:19:44.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.991 "prchk_reftag": false, 00:19:44.991 "prchk_guard": false, 00:19:44.991 "ctrlr_loss_timeout_sec": 0, 00:19:44.991 "reconnect_delay_sec": 0, 00:19:44.991 "fast_io_fail_timeout_sec": 0, 00:19:44.991 "psk": "key0", 00:19:44.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.991 "hdgst": false, 00:19:44.991 "ddgst": false, 00:19:44.991 "multipath": "multipath" 00:19:44.991 } 00:19:44.991 }, 00:19:44.991 { 00:19:44.991 "method": "bdev_nvme_set_hotplug", 00:19:44.991 "params": { 00:19:44.991 "period_us": 100000, 00:19:44.991 "enable": false 00:19:44.991 } 00:19:44.991 }, 00:19:44.991 { 00:19:44.991 "method": "bdev_wait_for_examine" 00:19:44.991 } 00:19:44.991 ] 00:19:44.991 }, 00:19:44.991 { 00:19:44.991 "subsystem": "nbd", 00:19:44.991 "config": [] 00:19:44.991 } 00:19:44.991 ] 00:19:44.991 }' 00:19:44.991 [2024-12-09 11:54:52.772290] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:19:44.991 [2024-12-09 11:54:52.772346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65996 ] 00:19:44.991 [2024-12-09 11:54:52.828932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.991 [2024-12-09 11:54:52.858993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.251 [2024-12-09 11:54:52.994123] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.821 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.821 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.821 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.821 Running I/O for 10 seconds... 00:19:47.771 5974.00 IOPS, 23.34 MiB/s [2024-12-09T10:54:57.037Z] 6131.50 IOPS, 23.95 MiB/s [2024-12-09T10:54:57.975Z] 6151.00 IOPS, 24.03 MiB/s [2024-12-09T10:54:58.915Z] 6161.50 IOPS, 24.07 MiB/s [2024-12-09T10:54:59.859Z] 6073.60 IOPS, 23.73 MiB/s [2024-12-09T10:55:00.801Z] 5946.67 IOPS, 23.23 MiB/s [2024-12-09T10:55:01.743Z] 5769.14 IOPS, 22.54 MiB/s [2024-12-09T10:55:02.683Z] 5799.62 IOPS, 22.65 MiB/s [2024-12-09T10:55:04.066Z] 5846.00 IOPS, 22.84 MiB/s [2024-12-09T10:55:04.066Z] 5808.50 IOPS, 22.69 MiB/s 00:19:56.180 Latency(us) 00:19:56.180 [2024-12-09T10:55:04.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.180 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.180 Verification LBA range: start 0x0 length 0x2000 00:19:56.180 TLSTESTn1 : 10.02 5810.94 22.70 0.00 0.00 21992.35 4614.83 74711.04 00:19:56.180 [2024-12-09T10:55:04.066Z] =================================================================================================================== 00:19:56.180 [2024-12-09T10:55:04.066Z] Total : 5810.94 22.70 0.00 0.00 21992.35 4614.83 74711.04 00:19:56.180 { 00:19:56.180 "results": [ 00:19:56.180 { 00:19:56.180 "job": "TLSTESTn1", 00:19:56.180 "core_mask": "0x4", 00:19:56.180 "workload": "verify", 00:19:56.180 "status": "finished", 00:19:56.180 "verify_range": { 00:19:56.180 "start": 0, 00:19:56.180 "length": 8192 00:19:56.180 }, 00:19:56.180 "queue_depth": 128, 00:19:56.180 "io_size": 4096, 00:19:56.180 "runtime": 10.017651, 00:19:56.180 "iops": 5810.943104326553, 00:19:56.180 "mibps": 22.698996501275598, 00:19:56.180 "io_failed": 0, 00:19:56.180 "io_timeout": 0, 00:19:56.180 "avg_latency_us": 21992.35481527291, 00:19:56.180 "min_latency_us": 4614.826666666667, 00:19:56.180 "max_latency_us": 74711.04 00:19:56.180 } 00:19:56.180 ], 00:19:56.180 "core_count": 1 00:19:56.180 } 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 65996 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 65996 ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 65996 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 
-- # uname 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65996 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65996' 00:19:56.180 killing process with pid 65996 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 65996 00:19:56.180 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.180 00:19:56.180 Latency(us) 00:19:56.180 [2024-12-09T10:55:04.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.180 [2024-12-09T10:55:04.066Z] =================================================================================================================== 00:19:56.180 [2024-12-09T10:55:04.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 65996 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 65935 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 65935 ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 65935 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65935 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65935' 00:19:56.180 killing process with pid 65935 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 65935 00:19:56.180 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 65935 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=68298 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 68298 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 68298 ']' 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.180 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.441 [2024-12-09 11:55:04.104698] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:56.441 [2024-12-09 11:55:04.104755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.441 [2024-12-09 11:55:04.194735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.441 [2024-12-09 11:55:04.229492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.441 [2024-12-09 11:55:04.229528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.441 [2024-12-09 11:55:04.229536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.441 [2024-12-09 11:55:04.229542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.441 [2024-12-09 11:55:04.229548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
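The lines that follow run setup_nvmf_tgt (target/tls.sh@50-59) one more time; condensed from the RPCs visible in this log, the target-side TLS bring-up is (rpc.py path shortened):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
        -a 10.0.0.2 -s 4420 -k              # -k: TLS-enabled listener (experimental)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag is what triggers the "TLS support is considered experimental" notice, and --psk key0 binds host1's admission to the PSK loaded into the keyring.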
00:19:56.441 [2024-12-09 11:55:04.230108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.014 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.014 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.014 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:57.014 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.014 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.274 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.275 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.M7Tzoie8F3 00:19:57.275 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.M7Tzoie8F3 00:19:57.275 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.275 [2024-12-09 11:55:05.105508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.275 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.536 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.797 [2024-12-09 11:55:05.474434] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.797 [2024-12-09 11:55:05.474801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.797 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.797 malloc0 00:19:57.797 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:58.059 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=68665 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 68665 /var/tmp/bdevperf.sock 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 68665 ']' 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.319 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.580 [2024-12-09 11:55:06.243565] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:19:58.580 [2024-12-09 11:55:06.243647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68665 ] 00:19:58.580 [2024-12-09 11:55:06.330587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.580 [2024-12-09 11:55:06.364769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.520 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.520 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.521 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:19:59.521 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:59.521 [2024-12-09 11:55:07.359879] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.780 nvme0n1 00:19:59.780 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.780 Running I/O for 1 seconds... 
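The 1-second run above was reached by the initiator-side counterpart, taken from target/tls.sh@229-234 (rpc.py/bdevperf.py paths shortened): load the same PSK file into bdevperf's keyring, attach the controller over TLS, then drive I/O:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because the controller is named nvme0 here, the results block below reports the job as nvme0n1 rather than TLSTESTn1.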
00:20:00.722 4072.00 IOPS, 15.91 MiB/s 00:20:00.722 Latency(us) 00:20:00.722 [2024-12-09T10:55:08.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.722 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:00.722 Verification LBA range: start 0x0 length 0x2000 00:20:00.722 nvme0n1 : 1.02 4131.82 16.14 0.00 0.00 30798.31 6417.07 77769.39 00:20:00.722 [2024-12-09T10:55:08.608Z] =================================================================================================================== 00:20:00.722 [2024-12-09T10:55:08.608Z] Total : 4131.82 16.14 0.00 0.00 30798.31 6417.07 77769.39 00:20:00.722 { 00:20:00.722 "results": [ 00:20:00.722 { 00:20:00.722 "job": "nvme0n1", 00:20:00.722 "core_mask": "0x2", 00:20:00.722 "workload": "verify", 00:20:00.722 "status": "finished", 00:20:00.722 "verify_range": { 00:20:00.722 "start": 0, 00:20:00.722 "length": 8192 00:20:00.722 }, 00:20:00.722 "queue_depth": 128, 00:20:00.722 "io_size": 4096, 00:20:00.722 "runtime": 1.016502, 00:20:00.722 "iops": 4131.816759829297, 00:20:00.722 "mibps": 16.13990921808319, 00:20:00.722 "io_failed": 0, 00:20:00.722 "io_timeout": 0, 00:20:00.722 "avg_latency_us": 30798.31486984127, 00:20:00.722 "min_latency_us": 6417.066666666667, 00:20:00.722 "max_latency_us": 77769.38666666667 00:20:00.722 } 00:20:00.722 ], 00:20:00.722 "core_count": 1 00:20:00.722 } 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 68665 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 68665 ']' 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 68665 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.722 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68665 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68665' 00:20:00.983 killing process with pid 68665 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 68665 00:20:00.983 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.983 00:20:00.983 Latency(us) 00:20:00.983 [2024-12-09T10:55:08.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.983 [2024-12-09T10:55:08.869Z] =================================================================================================================== 00:20:00.983 [2024-12-09T10:55:08.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 68665 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 68298 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 68298 ']' 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 68298 00:20:00.983 11:55:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68298 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68298' 00:20:00.983 killing process with pid 68298 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 68298 00:20:00.983 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 68298 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=69293 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 69293 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 69293 ']' 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.244 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.244 [2024-12-09 11:55:09.009714] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:20:01.244 [2024-12-09 11:55:09.009775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.244 [2024-12-09 11:55:09.103858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.505 [2024-12-09 11:55:09.154295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.505 [2024-12-09 11:55:09.154352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:01.505 [2024-12-09 11:55:09.154361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.505 [2024-12-09 11:55:09.154368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.505 [2024-12-09 11:55:09.154374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.505 [2024-12-09 11:55:09.155142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.077 [2024-12-09 11:55:09.865877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.077 malloc0 00:20:02.077 [2024-12-09 11:55:09.895994] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.077 [2024-12-09 11:55:09.896324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=69377 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 69377 /var/tmp/bdevperf.sock 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 69377 ']' 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.077 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.338 [2024-12-09 11:55:09.989472] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:20:02.338 [2024-12-09 11:55:09.989537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69377 ] 00:20:02.338 [2024-12-09 11:55:10.079110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.338 [2024-12-09 11:55:10.114200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.909 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.909 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.909 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.M7Tzoie8F3 00:20:03.170 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:03.430 [2024-12-09 11:55:11.077764] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.430 nvme0n1 00:20:03.430 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:03.430 Running I/O for 1 seconds... 00:20:04.816 5460.00 IOPS, 21.33 MiB/s 00:20:04.816 Latency(us) 00:20:04.816 [2024-12-09T10:55:12.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:04.816 Verification LBA range: start 0x0 length 0x2000 00:20:04.816 nvme0n1 : 1.01 5523.92 21.58 0.00 0.00 23029.98 4696.75 24903.68 00:20:04.816 [2024-12-09T10:55:12.702Z] =================================================================================================================== 00:20:04.816 [2024-12-09T10:55:12.702Z] Total : 5523.92 21.58 0.00 0.00 23029.98 4696.75 24903.68 00:20:04.816 { 00:20:04.817 "results": [ 00:20:04.817 { 00:20:04.817 "job": "nvme0n1", 00:20:04.817 "core_mask": "0x2", 00:20:04.817 "workload": "verify", 00:20:04.817 "status": "finished", 00:20:04.817 "verify_range": { 00:20:04.817 "start": 0, 00:20:04.817 "length": 8192 00:20:04.817 }, 00:20:04.817 "queue_depth": 128, 00:20:04.817 "io_size": 4096, 00:20:04.817 "runtime": 1.0116, 00:20:04.817 "iops": 5523.922499011467, 00:20:04.817 "mibps": 21.577822261763544, 00:20:04.817 "io_failed": 0, 00:20:04.817 "io_timeout": 0, 00:20:04.817 "avg_latency_us": 23029.98478644715, 00:20:04.817 "min_latency_us": 4696.746666666667, 00:20:04.817 "max_latency_us": 24903.68 00:20:04.817 } 00:20:04.817 ], 00:20:04.817 "core_count": 1 00:20:04.817 } 00:20:04.817 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:04.817 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.817 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.817 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.817 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@267 -- # tgtcfg='{ 00:20:04.817 "subsystems": [ 00:20:04.817 { 00:20:04.817 "subsystem": "keyring", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "keyring_file_add_key", 00:20:04.817 "params": { 00:20:04.817 "name": "key0", 00:20:04.817 "path": "/tmp/tmp.M7Tzoie8F3" 00:20:04.817 } 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "iobuf", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "iobuf_set_options", 00:20:04.817 "params": { 00:20:04.817 "small_pool_count": 8192, 00:20:04.817 "large_pool_count": 1024, 00:20:04.817 "small_bufsize": 8192, 00:20:04.817 "large_bufsize": 135168, 00:20:04.817 "enable_numa": false 00:20:04.817 } 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "sock", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "sock_set_default_impl", 00:20:04.817 "params": { 00:20:04.817 "impl_name": "posix" 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "sock_impl_set_options", 00:20:04.817 "params": { 00:20:04.817 "impl_name": "ssl", 00:20:04.817 "recv_buf_size": 4096, 00:20:04.817 "send_buf_size": 4096, 00:20:04.817 "enable_recv_pipe": true, 00:20:04.817 "enable_quickack": false, 00:20:04.817 "enable_placement_id": 0, 00:20:04.817 "enable_zerocopy_send_server": true, 00:20:04.817 "enable_zerocopy_send_client": false, 00:20:04.817 "zerocopy_threshold": 0, 00:20:04.817 "tls_version": 0, 00:20:04.817 "enable_ktls": false 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "sock_impl_set_options", 00:20:04.817 "params": { 00:20:04.817 "impl_name": "posix", 00:20:04.817 "recv_buf_size": 2097152, 00:20:04.817 "send_buf_size": 2097152, 00:20:04.817 "enable_recv_pipe": true, 00:20:04.817 "enable_quickack": false, 00:20:04.817 "enable_placement_id": 0, 00:20:04.817 "enable_zerocopy_send_server": true, 00:20:04.817 "enable_zerocopy_send_client": false, 00:20:04.817 "zerocopy_threshold": 0, 00:20:04.817 "tls_version": 0, 00:20:04.817 "enable_ktls": false 00:20:04.817 } 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "vmd", 00:20:04.817 "config": [] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "accel", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "accel_set_options", 00:20:04.817 "params": { 00:20:04.817 "small_cache_size": 128, 00:20:04.817 "large_cache_size": 16, 00:20:04.817 "task_count": 2048, 00:20:04.817 "sequence_count": 2048, 00:20:04.817 "buf_count": 2048 00:20:04.817 } 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "bdev", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "bdev_set_options", 00:20:04.817 "params": { 00:20:04.817 "bdev_io_pool_size": 65535, 00:20:04.817 "bdev_io_cache_size": 256, 00:20:04.817 "bdev_auto_examine": true, 00:20:04.817 "iobuf_small_cache_size": 128, 00:20:04.817 "iobuf_large_cache_size": 16 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_raid_set_options", 00:20:04.817 "params": { 00:20:04.817 "process_window_size_kb": 1024, 00:20:04.817 "process_max_bandwidth_mb_sec": 0 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_iscsi_set_options", 00:20:04.817 "params": { 00:20:04.817 "timeout_sec": 30 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_nvme_set_options", 00:20:04.817 "params": { 00:20:04.817 "action_on_timeout": "none", 00:20:04.817 "timeout_us": 0, 00:20:04.817 
"timeout_admin_us": 0, 00:20:04.817 "keep_alive_timeout_ms": 10000, 00:20:04.817 "arbitration_burst": 0, 00:20:04.817 "low_priority_weight": 0, 00:20:04.817 "medium_priority_weight": 0, 00:20:04.817 "high_priority_weight": 0, 00:20:04.817 "nvme_adminq_poll_period_us": 10000, 00:20:04.817 "nvme_ioq_poll_period_us": 0, 00:20:04.817 "io_queue_requests": 0, 00:20:04.817 "delay_cmd_submit": true, 00:20:04.817 "transport_retry_count": 4, 00:20:04.817 "bdev_retry_count": 3, 00:20:04.817 "transport_ack_timeout": 0, 00:20:04.817 "ctrlr_loss_timeout_sec": 0, 00:20:04.817 "reconnect_delay_sec": 0, 00:20:04.817 "fast_io_fail_timeout_sec": 0, 00:20:04.817 "disable_auto_failback": false, 00:20:04.817 "generate_uuids": false, 00:20:04.817 "transport_tos": 0, 00:20:04.817 "nvme_error_stat": false, 00:20:04.817 "rdma_srq_size": 0, 00:20:04.817 "io_path_stat": false, 00:20:04.817 "allow_accel_sequence": false, 00:20:04.817 "rdma_max_cq_size": 0, 00:20:04.817 "rdma_cm_event_timeout_ms": 0, 00:20:04.817 "dhchap_digests": [ 00:20:04.817 "sha256", 00:20:04.817 "sha384", 00:20:04.817 "sha512" 00:20:04.817 ], 00:20:04.817 "dhchap_dhgroups": [ 00:20:04.817 "null", 00:20:04.817 "ffdhe2048", 00:20:04.817 "ffdhe3072", 00:20:04.817 "ffdhe4096", 00:20:04.817 "ffdhe6144", 00:20:04.817 "ffdhe8192" 00:20:04.817 ] 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_nvme_set_hotplug", 00:20:04.817 "params": { 00:20:04.817 "period_us": 100000, 00:20:04.817 "enable": false 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_malloc_create", 00:20:04.817 "params": { 00:20:04.817 "name": "malloc0", 00:20:04.817 "num_blocks": 8192, 00:20:04.817 "block_size": 4096, 00:20:04.817 "physical_block_size": 4096, 00:20:04.817 "uuid": "8923ab34-8d0a-4b54-8d85-d09c55a124bf", 00:20:04.817 "optimal_io_boundary": 0, 00:20:04.817 "md_size": 0, 00:20:04.817 "dif_type": 0, 00:20:04.817 "dif_is_head_of_md": false, 00:20:04.817 "dif_pi_format": 0 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "bdev_wait_for_examine" 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "nbd", 00:20:04.817 "config": [] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "scheduler", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "framework_set_scheduler", 00:20:04.817 "params": { 00:20:04.817 "name": "static" 00:20:04.817 } 00:20:04.817 } 00:20:04.817 ] 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "subsystem": "nvmf", 00:20:04.817 "config": [ 00:20:04.817 { 00:20:04.817 "method": "nvmf_set_config", 00:20:04.817 "params": { 00:20:04.817 "discovery_filter": "match_any", 00:20:04.817 "admin_cmd_passthru": { 00:20:04.817 "identify_ctrlr": false 00:20:04.817 }, 00:20:04.817 "dhchap_digests": [ 00:20:04.817 "sha256", 00:20:04.817 "sha384", 00:20:04.817 "sha512" 00:20:04.817 ], 00:20:04.817 "dhchap_dhgroups": [ 00:20:04.817 "null", 00:20:04.817 "ffdhe2048", 00:20:04.817 "ffdhe3072", 00:20:04.817 "ffdhe4096", 00:20:04.817 "ffdhe6144", 00:20:04.817 "ffdhe8192" 00:20:04.817 ] 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "nvmf_set_max_subsystems", 00:20:04.817 "params": { 00:20:04.817 "max_subsystems": 1024 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "nvmf_set_crdt", 00:20:04.817 "params": { 00:20:04.817 "crdt1": 0, 00:20:04.817 "crdt2": 0, 00:20:04.817 "crdt3": 0 00:20:04.817 } 00:20:04.817 }, 00:20:04.817 { 00:20:04.817 "method": "nvmf_create_transport", 00:20:04.817 "params": { 00:20:04.817 "trtype": 
"TCP", 00:20:04.817 "max_queue_depth": 128, 00:20:04.818 "max_io_qpairs_per_ctrlr": 127, 00:20:04.818 "in_capsule_data_size": 4096, 00:20:04.818 "max_io_size": 131072, 00:20:04.818 "io_unit_size": 131072, 00:20:04.818 "max_aq_depth": 128, 00:20:04.818 "num_shared_buffers": 511, 00:20:04.818 "buf_cache_size": 4294967295, 00:20:04.818 "dif_insert_or_strip": false, 00:20:04.818 "zcopy": false, 00:20:04.818 "c2h_success": false, 00:20:04.818 "sock_priority": 0, 00:20:04.818 "abort_timeout_sec": 1, 00:20:04.818 "ack_timeout": 0, 00:20:04.818 "data_wr_pool_size": 0 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "nvmf_create_subsystem", 00:20:04.818 "params": { 00:20:04.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.818 "allow_any_host": false, 00:20:04.818 "serial_number": "00000000000000000000", 00:20:04.818 "model_number": "SPDK bdev Controller", 00:20:04.818 "max_namespaces": 32, 00:20:04.818 "min_cntlid": 1, 00:20:04.818 "max_cntlid": 65519, 00:20:04.818 "ana_reporting": false 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "nvmf_subsystem_add_host", 00:20:04.818 "params": { 00:20:04.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.818 "host": "nqn.2016-06.io.spdk:host1", 00:20:04.818 "psk": "key0" 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "nvmf_subsystem_add_ns", 00:20:04.818 "params": { 00:20:04.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.818 "namespace": { 00:20:04.818 "nsid": 1, 00:20:04.818 "bdev_name": "malloc0", 00:20:04.818 "nguid": "8923AB348D0A4B548D85D09C55A124BF", 00:20:04.818 "uuid": "8923ab34-8d0a-4b54-8d85-d09c55a124bf", 00:20:04.818 "no_auto_visible": false 00:20:04.818 } 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "nvmf_subsystem_add_listener", 00:20:04.818 "params": { 00:20:04.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.818 "listen_address": { 00:20:04.818 "trtype": "TCP", 00:20:04.818 "adrfam": "IPv4", 00:20:04.818 "traddr": "10.0.0.2", 00:20:04.818 "trsvcid": "4420" 00:20:04.818 }, 00:20:04.818 "secure_channel": false, 00:20:04.818 "sock_impl": "ssl" 00:20:04.818 } 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 }' 00:20:04.818 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:04.818 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:04.818 "subsystems": [ 00:20:04.818 { 00:20:04.818 "subsystem": "keyring", 00:20:04.818 "config": [ 00:20:04.818 { 00:20:04.818 "method": "keyring_file_add_key", 00:20:04.818 "params": { 00:20:04.818 "name": "key0", 00:20:04.818 "path": "/tmp/tmp.M7Tzoie8F3" 00:20:04.818 } 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "subsystem": "iobuf", 00:20:04.818 "config": [ 00:20:04.818 { 00:20:04.818 "method": "iobuf_set_options", 00:20:04.818 "params": { 00:20:04.818 "small_pool_count": 8192, 00:20:04.818 "large_pool_count": 1024, 00:20:04.818 "small_bufsize": 8192, 00:20:04.818 "large_bufsize": 135168, 00:20:04.818 "enable_numa": false 00:20:04.818 } 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "subsystem": "sock", 00:20:04.818 "config": [ 00:20:04.818 { 00:20:04.818 "method": "sock_set_default_impl", 00:20:04.818 "params": { 00:20:04.818 "impl_name": "posix" 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "sock_impl_set_options", 00:20:04.818 "params": { 00:20:04.818 
"impl_name": "ssl", 00:20:04.818 "recv_buf_size": 4096, 00:20:04.818 "send_buf_size": 4096, 00:20:04.818 "enable_recv_pipe": true, 00:20:04.818 "enable_quickack": false, 00:20:04.818 "enable_placement_id": 0, 00:20:04.818 "enable_zerocopy_send_server": true, 00:20:04.818 "enable_zerocopy_send_client": false, 00:20:04.818 "zerocopy_threshold": 0, 00:20:04.818 "tls_version": 0, 00:20:04.818 "enable_ktls": false 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "sock_impl_set_options", 00:20:04.818 "params": { 00:20:04.818 "impl_name": "posix", 00:20:04.818 "recv_buf_size": 2097152, 00:20:04.818 "send_buf_size": 2097152, 00:20:04.818 "enable_recv_pipe": true, 00:20:04.818 "enable_quickack": false, 00:20:04.818 "enable_placement_id": 0, 00:20:04.818 "enable_zerocopy_send_server": true, 00:20:04.818 "enable_zerocopy_send_client": false, 00:20:04.818 "zerocopy_threshold": 0, 00:20:04.818 "tls_version": 0, 00:20:04.818 "enable_ktls": false 00:20:04.818 } 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "subsystem": "vmd", 00:20:04.818 "config": [] 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "subsystem": "accel", 00:20:04.818 "config": [ 00:20:04.818 { 00:20:04.818 "method": "accel_set_options", 00:20:04.818 "params": { 00:20:04.818 "small_cache_size": 128, 00:20:04.818 "large_cache_size": 16, 00:20:04.818 "task_count": 2048, 00:20:04.818 "sequence_count": 2048, 00:20:04.818 "buf_count": 2048 00:20:04.818 } 00:20:04.818 } 00:20:04.818 ] 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "subsystem": "bdev", 00:20:04.818 "config": [ 00:20:04.818 { 00:20:04.818 "method": "bdev_set_options", 00:20:04.818 "params": { 00:20:04.818 "bdev_io_pool_size": 65535, 00:20:04.818 "bdev_io_cache_size": 256, 00:20:04.818 "bdev_auto_examine": true, 00:20:04.818 "iobuf_small_cache_size": 128, 00:20:04.818 "iobuf_large_cache_size": 16 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "bdev_raid_set_options", 00:20:04.818 "params": { 00:20:04.818 "process_window_size_kb": 1024, 00:20:04.818 "process_max_bandwidth_mb_sec": 0 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "bdev_iscsi_set_options", 00:20:04.818 "params": { 00:20:04.818 "timeout_sec": 30 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "bdev_nvme_set_options", 00:20:04.818 "params": { 00:20:04.818 "action_on_timeout": "none", 00:20:04.818 "timeout_us": 0, 00:20:04.818 "timeout_admin_us": 0, 00:20:04.818 "keep_alive_timeout_ms": 10000, 00:20:04.818 "arbitration_burst": 0, 00:20:04.818 "low_priority_weight": 0, 00:20:04.818 "medium_priority_weight": 0, 00:20:04.818 "high_priority_weight": 0, 00:20:04.818 "nvme_adminq_poll_period_us": 10000, 00:20:04.818 "nvme_ioq_poll_period_us": 0, 00:20:04.818 "io_queue_requests": 512, 00:20:04.818 "delay_cmd_submit": true, 00:20:04.818 "transport_retry_count": 4, 00:20:04.818 "bdev_retry_count": 3, 00:20:04.818 "transport_ack_timeout": 0, 00:20:04.818 "ctrlr_loss_timeout_sec": 0, 00:20:04.818 "reconnect_delay_sec": 0, 00:20:04.818 "fast_io_fail_timeout_sec": 0, 00:20:04.818 "disable_auto_failback": false, 00:20:04.818 "generate_uuids": false, 00:20:04.818 "transport_tos": 0, 00:20:04.818 "nvme_error_stat": false, 00:20:04.818 "rdma_srq_size": 0, 00:20:04.818 "io_path_stat": false, 00:20:04.818 "allow_accel_sequence": false, 00:20:04.818 "rdma_max_cq_size": 0, 00:20:04.818 "rdma_cm_event_timeout_ms": 0, 00:20:04.818 "dhchap_digests": [ 00:20:04.818 "sha256", 00:20:04.818 "sha384", 00:20:04.818 "sha512" 00:20:04.818 ], 
00:20:04.818 "dhchap_dhgroups": [ 00:20:04.818 "null", 00:20:04.818 "ffdhe2048", 00:20:04.818 "ffdhe3072", 00:20:04.818 "ffdhe4096", 00:20:04.818 "ffdhe6144", 00:20:04.818 "ffdhe8192" 00:20:04.818 ] 00:20:04.818 } 00:20:04.818 }, 00:20:04.818 { 00:20:04.818 "method": "bdev_nvme_attach_controller", 00:20:04.818 "params": { 00:20:04.818 "name": "nvme0", 00:20:04.818 "trtype": "TCP", 00:20:04.818 "adrfam": "IPv4", 00:20:04.818 "traddr": "10.0.0.2", 00:20:04.818 "trsvcid": "4420", 00:20:04.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.818 "prchk_reftag": false, 00:20:04.818 "prchk_guard": false, 00:20:04.818 "ctrlr_loss_timeout_sec": 0, 00:20:04.818 "reconnect_delay_sec": 0, 00:20:04.818 "fast_io_fail_timeout_sec": 0, 00:20:04.818 "psk": "key0", 00:20:04.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.818 "hdgst": false, 00:20:04.818 "ddgst": false, 00:20:04.818 "multipath": "multipath" 00:20:04.819 } 00:20:04.819 }, 00:20:04.819 { 00:20:04.819 "method": "bdev_nvme_set_hotplug", 00:20:04.819 "params": { 00:20:04.819 "period_us": 100000, 00:20:04.819 "enable": false 00:20:04.819 } 00:20:04.819 }, 00:20:04.819 { 00:20:04.819 "method": "bdev_enable_histogram", 00:20:04.819 "params": { 00:20:04.819 "name": "nvme0n1", 00:20:04.819 "enable": true 00:20:04.819 } 00:20:04.819 }, 00:20:04.819 { 00:20:04.819 "method": "bdev_wait_for_examine" 00:20:04.819 } 00:20:04.819 ] 00:20:04.819 }, 00:20:04.819 { 00:20:04.819 "subsystem": "nbd", 00:20:04.819 "config": [] 00:20:04.819 } 00:20:04.819 ] 00:20:04.819 }' 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 69377 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 69377 ']' 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 69377 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.819 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69377 00:20:05.079 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:05.079 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:05.079 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69377' 00:20:05.079 killing process with pid 69377 00:20:05.079 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 69377 00:20:05.080 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.080 00:20:05.080 Latency(us) 00:20:05.080 [2024-12-09T10:55:12.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.080 [2024-12-09T10:55:12.966Z] =================================================================================================================== 00:20:05.080 [2024-12-09T10:55:12.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 69377 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 69293 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 69293 ']' 00:20:05.080 11:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 69293 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69293 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69293' 00:20:05.080 killing process with pid 69293 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 69293 00:20:05.080 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 69293 00:20:05.341 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:05.341 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:05.341 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.341 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:05.341 "subsystems": [ 00:20:05.341 { 00:20:05.341 "subsystem": "keyring", 00:20:05.341 "config": [ 00:20:05.341 { 00:20:05.341 "method": "keyring_file_add_key", 00:20:05.341 "params": { 00:20:05.341 "name": "key0", 00:20:05.341 "path": "/tmp/tmp.M7Tzoie8F3" 00:20:05.341 } 00:20:05.341 } 00:20:05.341 ] 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "subsystem": "iobuf", 00:20:05.341 "config": [ 00:20:05.341 { 00:20:05.341 "method": "iobuf_set_options", 00:20:05.341 "params": { 00:20:05.341 "small_pool_count": 8192, 00:20:05.341 "large_pool_count": 1024, 00:20:05.341 "small_bufsize": 8192, 00:20:05.341 "large_bufsize": 135168, 00:20:05.341 "enable_numa": false 00:20:05.341 } 00:20:05.341 } 00:20:05.341 ] 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "subsystem": "sock", 00:20:05.341 "config": [ 00:20:05.341 { 00:20:05.341 "method": "sock_set_default_impl", 00:20:05.341 "params": { 00:20:05.341 "impl_name": "posix" 00:20:05.341 } 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "method": "sock_impl_set_options", 00:20:05.341 "params": { 00:20:05.341 "impl_name": "ssl", 00:20:05.341 "recv_buf_size": 4096, 00:20:05.341 "send_buf_size": 4096, 00:20:05.341 "enable_recv_pipe": true, 00:20:05.341 "enable_quickack": false, 00:20:05.341 "enable_placement_id": 0, 00:20:05.341 "enable_zerocopy_send_server": true, 00:20:05.341 "enable_zerocopy_send_client": false, 00:20:05.341 "zerocopy_threshold": 0, 00:20:05.341 "tls_version": 0, 00:20:05.341 "enable_ktls": false 00:20:05.341 } 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "method": "sock_impl_set_options", 00:20:05.341 "params": { 00:20:05.341 "impl_name": "posix", 00:20:05.341 "recv_buf_size": 2097152, 00:20:05.341 "send_buf_size": 2097152, 00:20:05.341 "enable_recv_pipe": true, 00:20:05.341 "enable_quickack": false, 00:20:05.341 "enable_placement_id": 0, 00:20:05.341 "enable_zerocopy_send_server": true, 00:20:05.341 "enable_zerocopy_send_client": false, 00:20:05.341 "zerocopy_threshold": 0, 00:20:05.341 "tls_version": 0, 00:20:05.341 "enable_ktls": false 00:20:05.341 } 00:20:05.341 } 
00:20:05.341 ] 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "subsystem": "vmd", 00:20:05.341 "config": [] 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "subsystem": "accel", 00:20:05.341 "config": [ 00:20:05.341 { 00:20:05.341 "method": "accel_set_options", 00:20:05.341 "params": { 00:20:05.341 "small_cache_size": 128, 00:20:05.341 "large_cache_size": 16, 00:20:05.341 "task_count": 2048, 00:20:05.341 "sequence_count": 2048, 00:20:05.341 "buf_count": 2048 00:20:05.341 } 00:20:05.341 } 00:20:05.341 ] 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "subsystem": "bdev", 00:20:05.341 "config": [ 00:20:05.341 { 00:20:05.341 "method": "bdev_set_options", 00:20:05.341 "params": { 00:20:05.341 "bdev_io_pool_size": 65535, 00:20:05.341 "bdev_io_cache_size": 256, 00:20:05.341 "bdev_auto_examine": true, 00:20:05.341 "iobuf_small_cache_size": 128, 00:20:05.341 "iobuf_large_cache_size": 16 00:20:05.341 } 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "method": "bdev_raid_set_options", 00:20:05.341 "params": { 00:20:05.341 "process_window_size_kb": 1024, 00:20:05.341 "process_max_bandwidth_mb_sec": 0 00:20:05.341 } 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "method": "bdev_iscsi_set_options", 00:20:05.341 "params": { 00:20:05.341 "timeout_sec": 30 00:20:05.341 } 00:20:05.341 }, 00:20:05.341 { 00:20:05.341 "method": "bdev_nvme_set_options", 00:20:05.341 "params": { 00:20:05.341 "action_on_timeout": "none", 00:20:05.341 "timeout_us": 0, 00:20:05.341 "timeout_admin_us": 0, 00:20:05.341 "keep_alive_timeout_ms": 10000, 00:20:05.341 "arbitration_burst": 0, 00:20:05.341 "low_priority_weight": 0, 00:20:05.341 "medium_priority_weight": 0, 00:20:05.341 "high_priority_weight": 0, 00:20:05.341 "nvme_adminq_poll_period_us": 10000, 00:20:05.341 "nvme_ioq_poll_period_us": 0, 00:20:05.341 "io_queue_requests": 0, 00:20:05.341 "delay_cmd_submit": true, 00:20:05.341 "transport_retry_count": 4, 00:20:05.341 "bdev_retry_count": 3, 00:20:05.341 "transport_ack_timeout": 0, 00:20:05.341 "ctrlr_loss_timeout_sec": 0, 00:20:05.341 "reconnect_delay_sec": 0, 00:20:05.341 "fast_io_fail_timeout_sec": 0, 00:20:05.341 "disable_auto_failback": false, 00:20:05.342 "generate_uuids": false, 00:20:05.342 "transport_tos": 0, 00:20:05.342 "nvme_error_stat": false, 00:20:05.342 "rdma_srq_size": 0, 00:20:05.342 "io_path_stat": false, 00:20:05.342 "allow_accel_sequence": false, 00:20:05.342 "rdma_max_cq_size": 0, 00:20:05.342 "rdma_cm_event_timeout_ms": 0, 00:20:05.342 "dhchap_digests": [ 00:20:05.342 "sha256", 00:20:05.342 "sha384", 00:20:05.342 "sha512" 00:20:05.342 ], 00:20:05.342 "dhchap_dhgroups": [ 00:20:05.342 "null", 00:20:05.342 "ffdhe2048", 00:20:05.342 "ffdhe3072", 00:20:05.342 "ffdhe4096", 00:20:05.342 "ffdhe6144", 00:20:05.342 "ffdhe8192" 00:20:05.342 ] 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "bdev_nvme_set_hotplug", 00:20:05.342 "params": { 00:20:05.342 "period_us": 100000, 00:20:05.342 "enable": false 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "bdev_malloc_create", 00:20:05.342 "params": { 00:20:05.342 "name": "malloc0", 00:20:05.342 "num_blocks": 8192, 00:20:05.342 "block_size": 4096, 00:20:05.342 "physical_block_size": 4096, 00:20:05.342 "uuid": "8923ab34-8d0a-4b54-8d85-d09c55a124bf", 00:20:05.342 "optimal_io_boundary": 0, 00:20:05.342 "md_size": 0, 00:20:05.342 "dif_type": 0, 00:20:05.342 "dif_is_head_of_md": false, 00:20:05.342 "dif_pi_format": 0 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "bdev_wait_for_examine" 00:20:05.342 } 00:20:05.342 ] 00:20:05.342 }, 
00:20:05.342 { 00:20:05.342 "subsystem": "nbd", 00:20:05.342 "config": [] 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "subsystem": "scheduler", 00:20:05.342 "config": [ 00:20:05.342 { 00:20:05.342 "method": "framework_set_scheduler", 00:20:05.342 "params": { 00:20:05.342 "name": "static" 00:20:05.342 } 00:20:05.342 } 00:20:05.342 ] 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "subsystem": "nvmf", 00:20:05.342 "config": [ 00:20:05.342 { 00:20:05.342 "method": "nvmf_set_config", 00:20:05.342 "params": { 00:20:05.342 "discovery_filter": "match_any", 00:20:05.342 "admin_cmd_passthru": { 00:20:05.342 "identify_ctrlr": false 00:20:05.342 }, 00:20:05.342 "dhchap_digests": [ 00:20:05.342 "sha256", 00:20:05.342 "sha384", 00:20:05.342 "sha512" 00:20:05.342 ], 00:20:05.342 "dhchap_dhgroups": [ 00:20:05.342 "null", 00:20:05.342 "ffdhe2048", 00:20:05.342 "ffdhe3072", 00:20:05.342 "ffdhe4096", 00:20:05.342 "ffdhe6144", 00:20:05.342 "ffdhe8192" 00:20:05.342 ] 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_set_max_subsystems", 00:20:05.342 "params": { 00:20:05.342 "max_subsystems": 1024 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_set_crdt", 00:20:05.342 "params": { 00:20:05.342 "crdt1": 0, 00:20:05.342 "crdt2": 0, 00:20:05.342 "crdt3": 0 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_create_transport", 00:20:05.342 "params": { 00:20:05.342 "trtype": "TCP", 00:20:05.342 "max_queue_depth": 128, 00:20:05.342 "max_io_qpairs_per_ctrlr": 127, 00:20:05.342 "in_capsule_data_size": 4096, 00:20:05.342 "max_io_size": 131072, 00:20:05.342 "io_unit_size": 131072, 00:20:05.342 "max_aq_depth": 128, 00:20:05.342 "num_shared_buffers": 511, 00:20:05.342 "buf_cache_size": 4294967295, 00:20:05.342 "dif_insert_or_strip": false, 00:20:05.342 "zcopy": false, 00:20:05.342 "c2h_success": false, 00:20:05.342 "sock_priority": 0, 00:20:05.342 "abort_timeout_sec": 1, 00:20:05.342 "ack_timeout": 0, 00:20:05.342 "data_wr_pool_size": 0 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_create_subsystem", 00:20:05.342 "params": { 00:20:05.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.342 "allow_any_host": false, 00:20:05.342 "serial_number": "00000000000000000000", 00:20:05.342 "model_number": "SPDK bdev Controller", 00:20:05.342 "max_namespaces": 32, 00:20:05.342 "min_cntlid": 1, 00:20:05.342 "max_cntlid": 65519, 00:20:05.342 "ana_reporting": false 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_subsystem_add_host", 00:20:05.342 "params": { 00:20:05.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.342 "host": "nqn.2016-06.io.spdk:host1", 00:20:05.342 "psk": "key0" 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_subsystem_add_ns", 00:20:05.342 "params": { 00:20:05.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.342 "namespace": { 00:20:05.342 "nsid": 1, 00:20:05.342 "bdev_name": "malloc0", 00:20:05.342 "nguid": "8923AB348D0A4B548D85D09C55A124BF", 00:20:05.342 "uuid": "8923ab34-8d0a-4b54-8d85-d09c55a124bf", 00:20:05.342 "no_auto_visible": false 00:20:05.342 } 00:20:05.342 } 00:20:05.342 }, 00:20:05.342 { 00:20:05.342 "method": "nvmf_subsystem_add_listener", 00:20:05.342 "params": { 00:20:05.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.342 "listen_address": { 00:20:05.342 "trtype": "TCP", 00:20:05.342 "adrfam": "IPv4", 00:20:05.342 "traddr": "10.0.0.2", 00:20:05.342 "trsvcid": "4420" 00:20:05.342 }, 00:20:05.342 "secure_channel": false, 00:20:05.342 "sock_impl": 
"ssl" 00:20:05.342 } 00:20:05.342 } 00:20:05.342 ] 00:20:05.342 } 00:20:05.342 ] 00:20:05.342 }' 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=70060 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 70060 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70060 ']' 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.342 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.342 [2024-12-09 11:55:13.070010] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:20:05.342 [2024-12-09 11:55:13.070064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.342 [2024-12-09 11:55:13.162153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.342 [2024-12-09 11:55:13.192311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.342 [2024-12-09 11:55:13.192345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.342 [2024-12-09 11:55:13.192351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.342 [2024-12-09 11:55:13.192356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.342 [2024-12-09 11:55:13.192360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:05.342 [2024-12-09 11:55:13.192872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.603 [2024-12-09 11:55:13.386883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.603 [2024-12-09 11:55:13.418908] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.603 [2024-12-09 11:55:13.419094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=70231 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 70231 /var/tmp/bdevperf.sock 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70231 ']' 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:06.173 "subsystems": [ 00:20:06.173 { 00:20:06.173 "subsystem": "keyring", 00:20:06.173 "config": [ 00:20:06.173 { 00:20:06.173 "method": "keyring_file_add_key", 00:20:06.173 "params": { 00:20:06.173 "name": "key0", 00:20:06.173 "path": "/tmp/tmp.M7Tzoie8F3" 00:20:06.173 } 00:20:06.173 } 00:20:06.174 ] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "iobuf", 00:20:06.174 "config": [ 00:20:06.174 { 00:20:06.174 "method": "iobuf_set_options", 00:20:06.174 "params": { 00:20:06.174 "small_pool_count": 8192, 00:20:06.174 "large_pool_count": 1024, 00:20:06.174 "small_bufsize": 8192, 00:20:06.174 "large_bufsize": 135168, 00:20:06.174 "enable_numa": false 00:20:06.174 } 00:20:06.174 } 00:20:06.174 ] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "sock", 00:20:06.174 "config": [ 00:20:06.174 { 00:20:06.174 "method": "sock_set_default_impl", 00:20:06.174 "params": { 00:20:06.174 "impl_name": "posix" 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "sock_impl_set_options", 00:20:06.174 "params": { 00:20:06.174 "impl_name": "ssl", 00:20:06.174 "recv_buf_size": 4096, 00:20:06.174 "send_buf_size": 4096, 00:20:06.174 "enable_recv_pipe": true, 00:20:06.174 "enable_quickack": false, 00:20:06.174 "enable_placement_id": 0, 00:20:06.174 "enable_zerocopy_send_server": true, 00:20:06.174 "enable_zerocopy_send_client": false, 00:20:06.174 "zerocopy_threshold": 0, 00:20:06.174 "tls_version": 0, 00:20:06.174 "enable_ktls": false 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "sock_impl_set_options", 00:20:06.174 "params": { 00:20:06.174 "impl_name": "posix", 00:20:06.174 "recv_buf_size": 2097152, 00:20:06.174 "send_buf_size": 2097152, 00:20:06.174 "enable_recv_pipe": true, 00:20:06.174 "enable_quickack": false, 00:20:06.174 "enable_placement_id": 0, 00:20:06.174 "enable_zerocopy_send_server": true, 00:20:06.174 "enable_zerocopy_send_client": false, 00:20:06.174 "zerocopy_threshold": 0, 00:20:06.174 "tls_version": 0, 00:20:06.174 "enable_ktls": false 00:20:06.174 } 00:20:06.174 } 00:20:06.174 ] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "vmd", 00:20:06.174 "config": [] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "accel", 00:20:06.174 "config": [ 00:20:06.174 { 00:20:06.174 "method": "accel_set_options", 00:20:06.174 "params": { 00:20:06.174 "small_cache_size": 128, 00:20:06.174 "large_cache_size": 16, 00:20:06.174 "task_count": 2048, 00:20:06.174 "sequence_count": 2048, 00:20:06.174 "buf_count": 2048 00:20:06.174 } 00:20:06.174 } 00:20:06.174 ] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "bdev", 00:20:06.174 "config": [ 00:20:06.174 { 00:20:06.174 "method": "bdev_set_options", 00:20:06.174 "params": { 00:20:06.174 "bdev_io_pool_size": 65535, 00:20:06.174 "bdev_io_cache_size": 256, 00:20:06.174 "bdev_auto_examine": true, 00:20:06.174 "iobuf_small_cache_size": 128, 00:20:06.174 "iobuf_large_cache_size": 16 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": 
"bdev_raid_set_options", 00:20:06.174 "params": { 00:20:06.174 "process_window_size_kb": 1024, 00:20:06.174 "process_max_bandwidth_mb_sec": 0 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_iscsi_set_options", 00:20:06.174 "params": { 00:20:06.174 "timeout_sec": 30 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_nvme_set_options", 00:20:06.174 "params": { 00:20:06.174 "action_on_timeout": "none", 00:20:06.174 "timeout_us": 0, 00:20:06.174 "timeout_admin_us": 0, 00:20:06.174 "keep_alive_timeout_ms": 10000, 00:20:06.174 "arbitration_burst": 0, 00:20:06.174 "low_priority_weight": 0, 00:20:06.174 "medium_priority_weight": 0, 00:20:06.174 "high_priority_weight": 0, 00:20:06.174 "nvme_adminq_poll_period_us": 10000, 00:20:06.174 "nvme_ioq_poll_period_us": 0, 00:20:06.174 "io_queue_requests": 512, 00:20:06.174 "delay_cmd_submit": true, 00:20:06.174 "transport_retry_count": 4, 00:20:06.174 "bdev_retry_count": 3, 00:20:06.174 "transport_ack_timeout": 0, 00:20:06.174 "ctrlr_loss_timeout_sec": 0, 00:20:06.174 "reconnect_delay_sec": 0, 00:20:06.174 "fast_io_fail_timeout_sec": 0, 00:20:06.174 "disable_auto_failback": false, 00:20:06.174 "generate_uuids": false, 00:20:06.174 "transport_tos": 0, 00:20:06.174 "nvme_error_stat": false, 00:20:06.174 "rdma_srq_size": 0, 00:20:06.174 "io_path_stat": false, 00:20:06.174 "allow_accel_sequence": false, 00:20:06.174 "rdma_max_cq_size": 0, 00:20:06.174 "rdma_cm_event_timeout_ms": 0, 00:20:06.174 "dhchap_digests": [ 00:20:06.174 "sha256", 00:20:06.174 "sha384", 00:20:06.174 "sha512" 00:20:06.174 ], 00:20:06.174 "dhchap_dhgroups": [ 00:20:06.174 "null", 00:20:06.174 "ffdhe2048", 00:20:06.174 "ffdhe3072", 00:20:06.174 "ffdhe4096", 00:20:06.174 "ffdhe6144", 00:20:06.174 "ffdhe8192" 00:20:06.174 ] 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_nvme_attach_controller", 00:20:06.174 "params": { 00:20:06.174 "name": "nvme0", 00:20:06.174 "trtype": "TCP", 00:20:06.174 "adrfam": "IPv4", 00:20:06.174 "traddr": "10.0.0.2", 00:20:06.174 "trsvcid": "4420", 00:20:06.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.174 "prchk_reftag": false, 00:20:06.174 "prchk_guard": false, 00:20:06.174 "ctrlr_loss_timeout_sec": 0, 00:20:06.174 "reconnect_delay_sec": 0, 00:20:06.174 "fast_io_fail_timeout_sec": 0, 00:20:06.174 "psk": "key0", 00:20:06.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.174 "hdgst": false, 00:20:06.174 "ddgst": false, 00:20:06.174 "multipath": "multipath" 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_nvme_set_hotplug", 00:20:06.174 "params": { 00:20:06.174 "period_us": 100000, 00:20:06.174 "enable": false 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_enable_histogram", 00:20:06.174 "params": { 00:20:06.174 "name": "nvme0n1", 00:20:06.174 "enable": true 00:20:06.174 } 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "method": "bdev_wait_for_examine" 00:20:06.174 } 00:20:06.174 ] 00:20:06.174 }, 00:20:06.174 { 00:20:06.174 "subsystem": "nbd", 00:20:06.174 "config": [] 00:20:06.174 } 00:20:06.174 ] 00:20:06.174 }' 00:20:06.174 [2024-12-09 11:55:13.957156] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:20:06.174 [2024-12-09 11:55:13.957207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:20:06.174 [2024-12-09 11:55:14.041812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.434 [2024-12-09 11:55:14.071512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.434 [2024-12-09 11:55:14.207728] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.005 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.005 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.005 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.005 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:07.264 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.264 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.264 Running I/O for 1 seconds... 00:20:08.204 4858.00 IOPS, 18.98 MiB/s 00:20:08.204 Latency(us) 00:20:08.204 [2024-12-09T10:55:16.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.205 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.205 Verification LBA range: start 0x0 length 0x2000 00:20:08.205 nvme0n1 : 1.01 4918.84 19.21 0.00 0.00 25861.84 5079.04 22500.69 00:20:08.205 [2024-12-09T10:55:16.091Z] =================================================================================================================== 00:20:08.205 [2024-12-09T10:55:16.091Z] Total : 4918.84 19.21 0.00 0.00 25861.84 5079.04 22500.69 00:20:08.205 { 00:20:08.205 "results": [ 00:20:08.205 { 00:20:08.205 "job": "nvme0n1", 00:20:08.205 "core_mask": "0x2", 00:20:08.205 "workload": "verify", 00:20:08.205 "status": "finished", 00:20:08.205 "verify_range": { 00:20:08.205 "start": 0, 00:20:08.205 "length": 8192 00:20:08.205 }, 00:20:08.205 "queue_depth": 128, 00:20:08.205 "io_size": 4096, 00:20:08.205 "runtime": 1.013654, 00:20:08.205 "iops": 4918.838183443266, 00:20:08.205 "mibps": 19.214211654075257, 00:20:08.205 "io_failed": 0, 00:20:08.205 "io_timeout": 0, 00:20:08.205 "avg_latency_us": 25861.83675892499, 00:20:08.205 "min_latency_us": 5079.04, 00:20:08.205 "max_latency_us": 22500.693333333333 00:20:08.205 } 00:20:08.205 ], 00:20:08.205 "core_count": 1 00:20:08.205 } 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
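
The mibps value bdevperf reports in these JSON result blocks follows directly from iops and io_size. A quick cross-check of the run above (numbers copied from the JSON; this check is not part of the test output):

    awk 'BEGIN {
      iops = 4918.838183443266            # "iops" from the result block above
      io   = 4096                         # "io_size" in bytes
      printf "%.6f MiB/s\n", iops * io / 1048576
    }'
    # prints 19.214212, matching the reported "mibps" of 19.214211654075257
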
00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:08.205 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:08.205 nvmf_trace.0 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 70231 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70231 ']' 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70231 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70231 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70231' 00:20:08.471 killing process with pid 70231 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70231 00:20:08.471 Received shutdown signal, test time was about 1.000000 seconds 00:20:08.471 00:20:08.471 Latency(us) 00:20:08.471 [2024-12-09T10:55:16.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.471 [2024-12-09T10:55:16.357Z] =================================================================================================================== 00:20:08.471 [2024-12-09T10:55:16.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70231 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # sync 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # set +e 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # for i in {1..20} 00:20:08.471 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:20:08.471 rmmod nvme_tcp 00:20:08.471 rmmod nvme_fabrics 00:20:08.471 rmmod nvme_keyring 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:20:08.776 11:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # set -e 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@130 -- # return 0 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 70060 ']' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70060 ']' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70060' 00:20:08.776 killing process with pid 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70060 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # iptr 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # remove_spdk_ns 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.776 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yjgAodulAk /tmp/tmp.xtbNTKtn7s /tmp/tmp.M7Tzoie8F3 00:20:10.820 00:20:10.820 real 1m20.915s 00:20:10.820 user 2m6.630s 00:20:10.820 sys 0m26.042s 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.820 ************************************ 00:20:10.820 END TEST nvmf_tls 00:20:10.820 
************************************ 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.820 11:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:11.081 ************************************ 00:20:11.081 START TEST nvmf_fips 00:20:11.081 ************************************ 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:11.081 * Looking for test storage... 00:20:11.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:11.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.081 --rc genhtml_branch_coverage=1 00:20:11.081 --rc genhtml_function_coverage=1 00:20:11.081 --rc genhtml_legend=1 00:20:11.081 --rc geninfo_all_blocks=1 00:20:11.081 --rc geninfo_unexecuted_blocks=1 00:20:11.081 00:20:11.081 ' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:11.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.081 --rc genhtml_branch_coverage=1 00:20:11.081 --rc genhtml_function_coverage=1 00:20:11.081 --rc genhtml_legend=1 00:20:11.081 --rc geninfo_all_blocks=1 00:20:11.081 --rc geninfo_unexecuted_blocks=1 00:20:11.081 00:20:11.081 ' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:11.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.081 --rc genhtml_branch_coverage=1 00:20:11.081 --rc genhtml_function_coverage=1 00:20:11.081 --rc genhtml_legend=1 00:20:11.081 --rc geninfo_all_blocks=1 00:20:11.081 --rc geninfo_unexecuted_blocks=1 00:20:11.081 00:20:11.081 ' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:11.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.081 --rc genhtml_branch_coverage=1 00:20:11.081 --rc genhtml_function_coverage=1 00:20:11.081 --rc genhtml_legend=1 00:20:11.081 --rc geninfo_all_blocks=1 00:20:11.081 --rc geninfo_unexecuted_blocks=1 00:20:11.081 00:20:11.081 ' 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:11.081 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # : 0 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:20:11.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@56 -- # have_pci_nics=0 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:11.082 11:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.082 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:11.343 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.343 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:11.344 Error setting digest 00:20:11.344 4082D4B3777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:11.344 4082D4B3777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:11.344 
11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # xtrace_disable 00:20:11.344 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_devs=() 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_devs 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_net_devs=() 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # pci_drivers=() 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # local -A pci_drivers 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # net_devs=() 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga net_devs 00:20:19.483 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # e810=() 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga e810 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # x722=() 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga x722 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@323 -- # mlx=() 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@323 -- # local -ga mlx 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.484 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:19.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:19.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:19.484 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:19.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:19.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:20:19.484 11:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:20:19.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:20:19.484 00:20:19.484 --- 10.0.0.2 ping statistics --- 00:20:19.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.484 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:20:19.484 00:20:19.484 --- 10.0.0.1 ping statistics --- 00:20:19.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.484 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=74990 00:20:19.484 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 74990 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 74990 ']' 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.485 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.485 [2024-12-09 11:55:26.572389] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:20:19.485 [2024-12-09 11:55:26.572468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.485 [2024-12-09 11:55:26.670098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.485 [2024-12-09 11:55:26.719544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.485 [2024-12-09 11:55:26.719598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.485 [2024-12-09 11:55:26.719606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.485 [2024-12-09 11:55:26.719613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.485 [2024-12-09 11:55:26.719620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.485 [2024-12-09 11:55:26.720435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.QvC 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.QvC 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.QvC 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.QvC 00:20:19.745 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.745 [2024-12-09 11:55:27.595980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.745 [2024-12-09 11:55:27.611962] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.745 [2024-12-09 11:55:27.612269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.005 malloc0 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.005 11:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75159 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75159 /var/tmp/bdevperf.sock 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75159 ']' 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.005 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.006 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.006 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.006 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:20.006 [2024-12-09 11:55:27.752819] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:20:20.006 [2024-12-09 11:55:27.752899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75159 ] 00:20:20.006 [2024-12-09 11:55:27.814931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.006 [2024-12-09 11:55:27.851550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.947 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.947 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:20.947 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.QvC 00:20:20.947 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.207 [2024-12-09 11:55:28.868168] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.207 TLSTESTn1 00:20:21.207 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.207 Running I/O for 10 seconds... 
00:20:23.536 6086.00 IOPS, 23.77 MiB/s [2024-12-09T10:55:32.365Z] 6264.50 IOPS, 24.47 MiB/s [2024-12-09T10:55:33.306Z] 6288.67 IOPS, 24.57 MiB/s [2024-12-09T10:55:34.245Z] 6160.25 IOPS, 24.06 MiB/s [2024-12-09T10:55:35.185Z] 6016.00 IOPS, 23.50 MiB/s [2024-12-09T10:55:36.126Z] 6094.83 IOPS, 23.81 MiB/s [2024-12-09T10:55:37.510Z] 6042.14 IOPS, 23.60 MiB/s [2024-12-09T10:55:38.080Z] 5984.25 IOPS, 23.38 MiB/s [2024-12-09T10:55:39.462Z] 5885.67 IOPS, 22.99 MiB/s [2024-12-09T10:55:39.462Z] 5935.10 IOPS, 23.18 MiB/s 00:20:31.576 Latency(us) 00:20:31.576 [2024-12-09T10:55:39.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.576 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:31.576 Verification LBA range: start 0x0 length 0x2000 00:20:31.576 TLSTESTn1 : 10.01 5940.98 23.21 0.00 0.00 21515.02 4532.91 27852.80 00:20:31.576 [2024-12-09T10:55:39.462Z] =================================================================================================================== 00:20:31.576 [2024-12-09T10:55:39.462Z] Total : 5940.98 23.21 0.00 0.00 21515.02 4532.91 27852.80 00:20:31.576 { 00:20:31.576 "results": [ 00:20:31.576 { 00:20:31.576 "job": "TLSTESTn1", 00:20:31.576 "core_mask": "0x4", 00:20:31.576 "workload": "verify", 00:20:31.576 "status": "finished", 00:20:31.576 "verify_range": { 00:20:31.576 "start": 0, 00:20:31.576 "length": 8192 00:20:31.576 }, 00:20:31.576 "queue_depth": 128, 00:20:31.576 "io_size": 4096, 00:20:31.576 "runtime": 10.011472, 00:20:31.576 "iops": 5940.984502578642, 00:20:31.576 "mibps": 23.20697071319782, 00:20:31.576 "io_failed": 0, 00:20:31.576 "io_timeout": 0, 00:20:31.576 "avg_latency_us": 21515.020823385676, 00:20:31.576 "min_latency_us": 4532.906666666667, 00:20:31.576 "max_latency_us": 27852.8 00:20:31.576 } 00:20:31.576 ], 00:20:31.576 "core_count": 1 00:20:31.576 } 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:31.576 nvmf_trace.0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75159 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75159 ']' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- 
# kill -0 75159 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75159 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75159' 00:20:31.576 killing process with pid 75159 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75159 00:20:31.576 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.576 00:20:31.576 Latency(us) 00:20:31.576 [2024-12-09T10:55:39.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.576 [2024-12-09T10:55:39.462Z] =================================================================================================================== 00:20:31.576 [2024-12-09T10:55:39.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75159 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # sync 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # set +e 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # for i in {1..20} 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:20:31.576 rmmod nvme_tcp 00:20:31.576 rmmod nvme_fabrics 00:20:31.576 rmmod nvme_keyring 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # set -e 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@130 -- # return 0 00:20:31.576 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 74990 ']' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 74990 ']' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74990' 00:20:31.837 killing process with pid 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 74990 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # iptr 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # remove_spdk_ns 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.837 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.QvC 00:20:34.384 00:20:34.384 real 0m23.011s 00:20:34.384 user 0m24.979s 00:20:34.384 sys 0m9.291s 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.384 ************************************ 00:20:34.384 END TEST nvmf_fips 00:20:34.384 ************************************ 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.384 11:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.384 ************************************ 00:20:34.385 START TEST nvmf_control_msg_list 00:20:34.385 ************************************ 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:34.385 * Looking for test storage... 
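The killprocess sequence traced above is the harness's guarded kill: probe the pid with kill -0, resolve the command name with ps, refuse to signal anything named sudo, then kill and wait so the exit status is reaped. A minimal standalone sketch of that pattern, assuming the target is a child of the current shell (the function name and messages are illustrative, not the library code):

    # Hedged sketch of the kill-and-wait pattern in the trace above.
    kill_and_wait() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it's already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1            # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap it; propagates the exit status
    }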
00:20:34.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.385 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.385 --rc genhtml_branch_coverage=1 00:20:34.385 --rc genhtml_function_coverage=1 00:20:34.385 --rc genhtml_legend=1 00:20:34.385 --rc geninfo_all_blocks=1 00:20:34.385 --rc geninfo_unexecuted_blocks=1 00:20:34.385 00:20:34.385 ' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.385 --rc genhtml_branch_coverage=1 00:20:34.385 --rc genhtml_function_coverage=1 00:20:34.385 --rc genhtml_legend=1 00:20:34.385 --rc geninfo_all_blocks=1 00:20:34.385 --rc geninfo_unexecuted_blocks=1 00:20:34.385 00:20:34.385 ' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.385 --rc genhtml_branch_coverage=1 00:20:34.385 --rc genhtml_function_coverage=1 00:20:34.385 --rc genhtml_legend=1 00:20:34.385 --rc geninfo_all_blocks=1 00:20:34.385 --rc geninfo_unexecuted_blocks=1 00:20:34.385 00:20:34.385 ' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.385 --rc genhtml_branch_coverage=1 00:20:34.385 --rc genhtml_function_coverage=1 00:20:34.385 --rc genhtml_legend=1 00:20:34.385 --rc geninfo_all_blocks=1 00:20:34.385 --rc geninfo_unexecuted_blocks=1 00:20:34.385 00:20:34.385 ' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.385 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # : 0 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:20:34.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@56 -- # have_pci_nics=0 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@310 -- # xtrace_disable 00:20:34.386 11:55:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_devs=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_devs 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_net_devs=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@318 -- # pci_drivers=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@318 -- # local -A pci_drivers 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # net_devs=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga net_devs 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # e810=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga e810 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # x722=() 00:20:42.536 11:55:49 
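Note the recurring, non-fatal bash error in the sourced common.sh just above: line 34 evaluates '[' '' -eq 1 ']', and test's -eq needs integers on both sides, so an unset flag expanding to the empty string trips "integer expression expected". A hedged sketch of the usual defensive form (the variable name is an illustrative stand-in, not the real flag):

    # [ '' -eq 1 ]  ->  "[: : integer expression expected"
    # Defaulting the expansion keeps the arithmetic test well-formed:
    SOME_TEST_FLAG=""                          # illustrative stand-in for the unset flag
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi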
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga x722 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@323 -- # mlx=() 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@323 -- # local -ga mlx 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.536 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:42.537 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.537 11:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:42.537 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:42.537 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:42.537 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.537 11:55:49 
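The nvmf_tcp_init sequence above builds the usual two-port loopback topology for phy runs: the first discovered E810 port (cvl_0_0) becomes the target at 10.0.0.2 inside a private network namespace, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, with the interface names exactly as discovered above:

    # Condensed from the nvmf_tcp_init trace above.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two ping runs that follow (root namespace to 10.0.0.2, then from inside the namespace back to 10.0.0.1) are the smoke test that this topology actually forwards traffic.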
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:20:42.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:20:42.537 00:20:42.537 --- 10.0.0.2 ping statistics --- 00:20:42.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.537 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:20:42.537 00:20:42.537 --- 10.0.0.1 ping statistics --- 00:20:42.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.537 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=81672 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 81672 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 81672 ']' 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.537 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.538 [2024-12-09 11:55:49.556577] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:20:42.538 [2024-12-09 11:55:49.556652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.538 [2024-12-09 11:55:49.652861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.538 [2024-12-09 11:55:49.703057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.538 [2024-12-09 11:55:49.703109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.538 [2024-12-09 11:55:49.703118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.538 [2024-12-09 11:55:49.703125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.538 [2024-12-09 11:55:49.703132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
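Since the target was launched with -e 0xFFFF, every tracepoint group is enabled, and the startup notices above spell out how to inspect them. Following those notices verbatim, a snapshot can be pulled from the running app, or the shared-memory file copied for offline decoding:

    # From the startup notices above (shm id 0, app name nvmf):
    spdk_trace -s nvmf -i 0                       # live snapshot of tracepoint events
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # or keep the raw file for later analysis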
00:20:42.538 [2024-12-09 11:55:49.703915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.538 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.538 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:42.538 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:42.538 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.538 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.799 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.799 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:42.799 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.800 [2024-12-09 11:55:50.431206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.800 Malloc0 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.800 11:55:50 
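The rpc_cmd calls above are the entire target-side setup for this test: a TCP transport constrained to a single control message and 768 bytes of in-capsule data (the squeeze the test exercises), an allow-any-host subsystem, and a 32 MiB malloc namespace; the 10.0.0.2:4420 listener is added in the trace that resumes below. The same configuration as plain scripts/rpc.py calls, offered as an equivalent sketch of what rpc_cmd routes over /var/tmp/spdk.sock, with all values copied from the trace:

    # Equivalent rpc.py invocations; values copied from the trace above.
    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a          # -a: allow any host
    rpc.py bdev_malloc_create -b Malloc0 32 512                         # 32 MiB bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420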
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:42.800 [2024-12-09 11:55:50.485723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=81861 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=81862 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=81863 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 81861 00:20:42.800 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.800 [2024-12-09 11:55:50.566207] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:42.800 [2024-12-09 11:55:50.596246] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:42.800 [2024-12-09 11:55:50.596496] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:44.184 Initializing NVMe Controllers 00:20:44.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:44.184 Initialization complete. Launching workers. 
00:20:44.184 ======================================================== 00:20:44.184 Latency(us) 00:20:44.184 Device Information : IOPS MiB/s Average min max 00:20:44.184 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 28.00 0.11 35896.39 331.79 41984.30 00:20:44.184 ======================================================== 00:20:44.184 Total : 28.00 0.11 35896.39 331.79 41984.30 00:20:44.184 00:20:44.184 Initializing NVMe Controllers 00:20:44.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:44.184 Initialization complete. Launching workers. 00:20:44.184 ======================================================== 00:20:44.184 Latency(us) 00:20:44.184 Device Information : IOPS MiB/s Average min max 00:20:44.184 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 26.00 0.10 39381.14 471.87 41433.77 00:20:44.184 ======================================================== 00:20:44.184 Total : 26.00 0.10 39381.14 471.87 41433.77 00:20:44.184 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 81862 00:20:44.184 Initializing NVMe Controllers 00:20:44.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:44.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:44.184 Initialization complete. Launching workers. 00:20:44.184 ======================================================== 00:20:44.184 Latency(us) 00:20:44.184 Device Information : IOPS MiB/s Average min max 00:20:44.184 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40904.38 40830.43 40949.15 00:20:44.184 ======================================================== 00:20:44.184 Total : 25.00 0.10 40904.38 40830.43 40949.15 00:20:44.184 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 81863 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # sync 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # set +e 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # for i in {1..20} 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:20:44.184 rmmod nvme_tcp 00:20:44.184 rmmod nvme_fabrics 00:20:44.184 rmmod nvme_keyring 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # set -e 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@130 -- # return 0 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # 
'[' -n 81672 ']' 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 81672 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 81672 ']' 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 81672 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.184 11:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81672 00:20:44.184 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.184 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.184 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81672' 00:20:44.184 killing process with pid 81672 00:20:44.184 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 81672 00:20:44.184 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 81672 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # iptr 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # remove_spdk_ns 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.445 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.360 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:20:46.360 00:20:46.360 real 0m12.430s 00:20:46.360 user 0m8.193s 00:20:46.360 sys 0m6.519s 00:20:46.360 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.360 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.360 ************************************ 00:20:46.360 END TEST nvmf_control_msg_list 00:20:46.360 ************************************ 00:20:46.621 11:55:54 
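nvmftestfini above unwinds everything in reverse inside set +e, so a module that is not loaded cannot fail the run: the nvme modules come out, only the SPDK-tagged firewall rules are dropped, and the namespace goes away. The iptables cleanup is the notable trick: every rule was inserted with an SPDK_NVMF comment, so teardown is a filtered save/restore rather than bookkeeping of rule numbers. A hedged sketch (the netns delete is the assumed effect of _remove_spdk_ns, which the trace runs with its output discarded):

    # Rules were added as:  iptables -I INPUT ... -m comment --comment 'SPDK_NVMF:...'
    # Round-trip the ruleset without those lines to delete them all at once:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    modprobe -v -r nvme-tcp                       # also drags out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1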
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:46.621 11:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.621 11:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.621 11:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.621 ************************************ 00:20:46.621 START TEST nvmf_wait_for_buf 00:20:46.621 ************************************ 00:20:46.621 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:46.621 * Looking for test storage... 00:20:46.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.621 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.622 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.884 --rc genhtml_branch_coverage=1 00:20:46.884 --rc genhtml_function_coverage=1 00:20:46.884 --rc genhtml_legend=1 00:20:46.884 --rc geninfo_all_blocks=1 00:20:46.884 --rc geninfo_unexecuted_blocks=1 00:20:46.884 00:20:46.884 ' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.884 --rc genhtml_branch_coverage=1 00:20:46.884 --rc genhtml_function_coverage=1 00:20:46.884 --rc genhtml_legend=1 00:20:46.884 --rc geninfo_all_blocks=1 00:20:46.884 --rc geninfo_unexecuted_blocks=1 00:20:46.884 00:20:46.884 ' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.884 --rc genhtml_branch_coverage=1 00:20:46.884 --rc genhtml_function_coverage=1 00:20:46.884 --rc genhtml_legend=1 00:20:46.884 --rc geninfo_all_blocks=1 00:20:46.884 --rc geninfo_unexecuted_blocks=1 00:20:46.884 00:20:46.884 ' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:46.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.884 --rc genhtml_branch_coverage=1 00:20:46.884 --rc genhtml_function_coverage=1 00:20:46.884 --rc genhtml_legend=1 00:20:46.884 --rc geninfo_all_blocks=1 00:20:46.884 --rc geninfo_unexecuted_blocks=1 00:20:46.884 00:20:46.884 ' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.884 11:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.884 11:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.884 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # : 0 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:20:46.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@56 -- # have_pci_nics=0 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:46.885 11:55:54 
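The "integer expression expected" complaint above is bash's test builtin choking on an empty operand: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the flag it checks is unset in this configuration, and -eq requires integers on both sides. The failure mode, and one defensive spelling, in a short sketch:

  flag=""                                   # unset/empty flag, as in this run
  [ "$flag" -eq 1 ] 2>/dev/null || echo "empty string is not an integer; test returns 2"
  [ "${flag:-0}" -eq 1 ] || echo "defaulting to 0 keeps -eq well-formed"

The harness tolerates the error (the branch simply isn't taken), which is why the run continues past it.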
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@310 -- # xtrace_disable 00:20:46.885 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_devs=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_devs 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_net_devs=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@318 -- # pci_drivers=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@318 -- # local -A pci_drivers 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # net_devs=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga net_devs 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # e810=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga e810 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # x722=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga x722 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@323 -- # mlx=() 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@323 -- # local -ga mlx 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:55.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:55.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.031 
11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:55.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:55.031 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
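Both E810 ports (vendor:device 0x8086:0x159b) resolve to kernel interfaces through sysfs; the discovery traced above is, condensed, the loop below. This is a sketch only: the real gather_supported_nvmf_pci_devs also builds the e810/x722/mlx ID tables seen earlier and skips interfaces that are not up. PCI addresses are verbatim from this run.

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    # every net device the kernel registered under this PCI function
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] || continue
      echo "Found net devices under $pci: ${path##*/}"
    done
  done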
nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:20:55.031 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.032 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.032 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.032 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.032 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:20:55.032 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:20:55.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:20:55.032 00:20:55.032 --- 10.0.0.2 ping statistics --- 00:20:55.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.032 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:20:55.032 00:20:55.032 --- 10.0.0.1 ping statistics --- 00:20:55.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.032 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=86469 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 86469 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 86469 ']' 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.032 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.032 [2024-12-09 11:56:02.170154] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
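The setup just traced compresses to about a dozen commands: move one NIC port into a private namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) cross real wire on a single host, punch a tagged firewall hole for the NVMe/TCP port, verify reachability both ways, then boot nvmf_tgt inside the namespace. Interface names, addresses, and flags below are verbatim from this run; only the harness wrappers and the long jenkins path are stripped.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The comment tag on the iptables rule matters later: teardown removes only rules stamped SPDK_NVMF.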
00:20:55.032 [2024-12-09 11:56:02.170253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.032 [2024-12-09 11:56:02.253853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.032 [2024-12-09 11:56:02.304803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.032 [2024-12-09 11:56:02.304857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.032 [2024-12-09 11:56:02.304866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.032 [2024-12-09 11:56:02.304874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.032 [2024-12-09 11:56:02.304881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.032 [2024-12-09 11:56:02.305651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.293 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.293 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:55.293 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:55.293 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.293 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 
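With the target parked in --wait-for-rpc, the wait_for_buf test deliberately starves the iobuf small pool (154 buffers of 8 KiB) before letting initialization finish, so TCP requests are forced onto the buffer-wait path. The rpc_cmd calls traced here and continued just below, collapsed into direct rpc.py invocations; the rpc.py path is the stock scripts/ location and an assumption of this sketch (the harness's rpc_cmd talks to /var/tmp/spdk.sock):

  RPC="./scripts/rpc.py"
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init                   # subsystems initialize only now
  $RPC bdev_malloc_create -b Malloc0 32 512   # 32 MiB ram disk, 512 B blocks
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420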
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 Malloc0 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 [2024-12-09 11:56:03.138718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.293 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:55.294 [2024-12-09 11:56:03.162998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.294 11:56:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.555 [2024-12-09 11:56:03.266751] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:56.944 Initializing NVMe Controllers 00:20:56.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:56.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:56.944 Initialization complete. Launching workers. 00:20:56.944 ======================================================== 00:20:56.944 Latency(us) 00:20:56.944 Device Information : IOPS MiB/s Average min max 00:20:56.944 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 49.00 6.12 85834.42 31921.72 151664.73 00:20:56.944 ======================================================== 00:20:56.945 Total : 49.00 6.12 85834.42 31921.72 151664.73 00:20:56.945 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=758 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 758 -eq 0 ]] 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@122 -- # sync 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # set +e 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # for i in {1..20} 00:20:56.945 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:20:56.945 rmmod nvme_tcp 00:20:56.945 rmmod nvme_fabrics 00:20:57.206 rmmod nvme_keyring 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # set -e 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@130 -- # return 0 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 86469 ']' 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 86469 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 86469 ']' 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 86469 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
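The pass criterion is simply that the small pool ran dry: 128 KiB random reads at queue depth 4 each span sixteen 8 KiB iobuf entries, so contention is guaranteed, and iobuf_get_stats must report a non-zero small_pool.retry for the nvmf_TCP module (758 in this run). The workload plus check as standalone commands, reusing the $RPC variable from the sketch above (jq filter verbatim from the trace):

  ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retry_count=$($RPC iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [ "$retry_count" -eq 0 ] && { echo "small pool never starved" >&2; exit 1; }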
common/autotest_common.sh@959 -- # uname 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86469 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86469' 00:20:57.206 killing process with pid 86469 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 86469 00:20:57.206 11:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 86469 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # iptr 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # remove_spdk_ns 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.206 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:20:59.753 00:20:59.753 real 0m12.844s 00:20:59.753 user 0m5.210s 00:20:59.753 sys 0m6.230s 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.753 ************************************ 00:20:59.753 END TEST nvmf_wait_for_buf 00:20:59.753 ************************************ 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # xtrace_disable 00:20:59.753 11:56:07 nvmf_tcp.nvmf_target_extra -- 
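Teardown is the mirror image, and the SPDK_NVMF comment tag added during setup is what keeps the firewall cleanup surgical. A condensed sketch; the namespace deletion is an assumption about what the _remove_spdk_ns helper amounts to, since its body is xtrace-suppressed above:

  kill "$nvmfpid" && wait "$nvmfpid"           # pid 86469 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1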
common/autotest_common.sh@10 -- # set +x 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_devs=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_devs 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_net_devs=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # pci_drivers=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # local -A pci_drivers 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # net_devs=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga net_devs 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # e810=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga e810 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # x722=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga x722 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@323 -- # mlx=() 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@323 -- # local -ga mlx 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.346 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:06.347 
11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:06.347 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:06.347 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:06.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:21:06.347 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.347 ************************************ 00:21:06.347 START TEST nvmf_perf_adq 00:21:06.347 ************************************ 00:21:06.347 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:06.347 * Looking for test storage... 00:21:06.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.347 --rc genhtml_branch_coverage=1 00:21:06.347 --rc genhtml_function_coverage=1 00:21:06.347 --rc genhtml_legend=1 00:21:06.347 --rc geninfo_all_blocks=1 00:21:06.347 --rc geninfo_unexecuted_blocks=1 00:21:06.347 00:21:06.347 ' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.347 --rc genhtml_branch_coverage=1 00:21:06.347 --rc genhtml_function_coverage=1 00:21:06.347 --rc genhtml_legend=1 00:21:06.347 --rc geninfo_all_blocks=1 00:21:06.347 --rc geninfo_unexecuted_blocks=1 00:21:06.347 00:21:06.347 ' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.347 --rc genhtml_branch_coverage=1 00:21:06.347 --rc genhtml_function_coverage=1 00:21:06.347 --rc genhtml_legend=1 00:21:06.347 --rc geninfo_all_blocks=1 00:21:06.347 --rc geninfo_unexecuted_blocks=1 00:21:06.347 00:21:06.347 ' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.347 --rc genhtml_branch_coverage=1 00:21:06.347 --rc genhtml_function_coverage=1 00:21:06.347 --rc genhtml_legend=1 00:21:06.347 --rc geninfo_all_blocks=1 00:21:06.347 --rc geninfo_unexecuted_blocks=1 00:21:06.347 00:21:06.347 ' 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- 
# uname -s 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.347 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # : 0 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:21:06.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@56 -- # have_pci_nics=0 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:06.348 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # xtrace_disable 00:21:06.348 11:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_devs=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_devs 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_net_devs=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # pci_drivers=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # local -A pci_drivers 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # net_devs=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga net_devs 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # e810=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga e810 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # x722=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga x722 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # mlx=() 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # local -ga mlx 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:21:14.492 11:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:14.492 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:14.492 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.492 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:14.493 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:14.493 11:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:14.493 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:14.493 11:56:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:15.063 11:56:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:17.610 11:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # xtrace_disable 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.904 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_devs=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_devs 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_net_devs=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # pci_drivers=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # local -A pci_drivers 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # net_devs=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga net_devs 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # e810=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga e810 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # x722=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga x722 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # mlx=() 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # local -ga mlx 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # 
pci_devs+=("${e810[@]}") 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:21:22.904 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:22.905 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:22.905 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:22.905 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:22.905 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:21:22.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:21:22.905 00:21:22.905 --- 10.0.0.2 ping statistics --- 00:21:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.905 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
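
The wiring the harness has just performed is easier to see condensed; the sketch below is a minimal re-statement of those commands, with the interface names, namespace name, and addresses taken from this run. Putting the target-side port in its own network namespace forces initiator-to-target TCP traffic across the two physical E810 ports instead of the kernel loopback, and the ping exchanges just above and below verify plain IP reachability in both directions before any NVMe/TCP traffic is attempted.

    # minimal sketch of the wiring above (cvl_0_0 = target port, cvl_0_1 = initiator port)
    ip netns add cvl_0_0_ns_spdk                        # namespace the target will run in
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the harness' ipts wrapper also tags the rule with an
    # SPDK_NVMF comment so teardown can strip it back out of iptables-save output
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
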
00:21:22.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:21:22.905 00:21:22.905 --- 10.0.0.1 ping statistics --- 00:21:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.905 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=96477 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 96477 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 96477 ']' 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.905 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.905 [2024-12-09 11:56:30.422846] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
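
The target now starting was launched with --wait-for-rpc, which parks SPDK before framework initialization so that socket-layer options can still be changed over /var/tmp/spdk.sock; only then does framework_start_init complete startup. The adq_configure_nvmf_target 0 sequence that follows boils down to the sketch below, with rpc.py standing in for the harness' rpc_cmd wrapper. In this first pass placement id and sock priority are both 0, i.e. the run measured without ADQ steering.

    # sketch of the launch-and-configure pattern used here (paths from this run)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # socket options must be set before the framework comes up
    "$SPDK/scripts/rpc.py" sock_impl_set_options --enable-placement-id 0 \
        --enable-zerocopy-send-server -i posix
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
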
00:21:22.905 [2024-12-09 11:56:30.422914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.905 [2024-12-09 11:56:30.524471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.905 [2024-12-09 11:56:30.579937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.905 [2024-12-09 11:56:30.579990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.905 [2024-12-09 11:56:30.579999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.905 [2024-12-09 11:56:30.580006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.906 [2024-12-09 11:56:30.580013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.906 [2024-12-09 11:56:30.581986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.906 [2024-12-09 11:56:30.582118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.906 [2024-12-09 11:56:30.582287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.906 [2024-12-09 11:56:30.582288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.490 
11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.490 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.750 [2024-12-09 11:56:31.400904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.750 Malloc1 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.750 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.751 [2024-12-09 11:56:31.469077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=96815 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:23.751 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:25.662 "tick_rate": 2400000000, 00:21:25.662 "poll_groups": [ 00:21:25.662 { 00:21:25.662 "name": "nvmf_tgt_poll_group_000", 00:21:25.662 "admin_qpairs": 1, 00:21:25.662 "io_qpairs": 1, 00:21:25.662 "current_admin_qpairs": 1, 00:21:25.662 "current_io_qpairs": 1, 00:21:25.662 "pending_bdev_io": 0, 00:21:25.662 "completed_nvme_io": 19686, 00:21:25.662 "transports": [ 00:21:25.662 { 00:21:25.662 "trtype": "TCP" 00:21:25.662 } 00:21:25.662 ] 00:21:25.662 }, 00:21:25.662 { 00:21:25.662 "name": "nvmf_tgt_poll_group_001", 00:21:25.662 "admin_qpairs": 0, 00:21:25.662 "io_qpairs": 1, 00:21:25.662 "current_admin_qpairs": 0, 00:21:25.662 "current_io_qpairs": 1, 00:21:25.662 "pending_bdev_io": 0, 00:21:25.662 "completed_nvme_io": 28093, 00:21:25.662 "transports": [ 00:21:25.662 { 00:21:25.662 "trtype": "TCP" 00:21:25.662 } 00:21:25.662 ] 00:21:25.662 }, 00:21:25.662 { 00:21:25.662 "name": "nvmf_tgt_poll_group_002", 00:21:25.662 "admin_qpairs": 0, 00:21:25.662 "io_qpairs": 1, 00:21:25.662 "current_admin_qpairs": 0, 00:21:25.662 "current_io_qpairs": 1, 00:21:25.662 "pending_bdev_io": 0, 00:21:25.662 "completed_nvme_io": 20515, 00:21:25.662 "transports": [ 00:21:25.662 { 00:21:25.662 "trtype": "TCP" 00:21:25.662 } 00:21:25.662 ] 00:21:25.662 }, 00:21:25.662 { 00:21:25.662 "name": "nvmf_tgt_poll_group_003", 00:21:25.662 "admin_qpairs": 0, 00:21:25.662 "io_qpairs": 1, 00:21:25.662 "current_admin_qpairs": 0, 00:21:25.662 "current_io_qpairs": 1, 00:21:25.662 "pending_bdev_io": 0, 00:21:25.662 "completed_nvme_io": 20079, 00:21:25.662 "transports": [ 00:21:25.662 { 00:21:25.662 "trtype": "TCP" 00:21:25.662 } 00:21:25.662 ] 00:21:25.662 } 00:21:25.662 ] 00:21:25.662 }' 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:25.662 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:25.923 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:25.923 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:25.923 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 96815 00:21:34.056 Initializing NVMe Controllers 00:21:34.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:34.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:34.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:34.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:34.056 Initialization complete. Launching workers. 00:21:34.056 ======================================================== 00:21:34.056 Latency(us) 00:21:34.056 Device Information : IOPS MiB/s Average min max 00:21:34.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12454.50 48.65 5139.40 1339.16 8991.82 00:21:34.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14657.30 57.26 4366.50 1250.15 9723.15 00:21:34.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14068.60 54.96 4556.49 981.58 44520.54 00:21:34.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13203.60 51.58 4846.83 1225.55 11181.28 00:21:34.056 ======================================================== 00:21:34.056 Total : 54384.00 212.44 4709.27 981.58 44520.54 00:21:34.056 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # sync 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # set +e 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # for i in {1..20} 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:21:34.056 rmmod nvme_tcp 00:21:34.056 rmmod nvme_fabrics 00:21:34.056 rmmod nvme_keyring 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # set -e 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@130 -- # return 0 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 96477 ']' 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 96477 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 96477 ']' 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 96477 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96477 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96477' 00:21:34.056 killing process with pid 96477 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 96477 00:21:34.056 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 96477 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- 
# '[' '' == iso ']' 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # iptr 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # remove_spdk_ns 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.226 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:21:36.226 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:36.226 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:36.226 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:38.136 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:40.048 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # xtrace_disable 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
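
One detail worth pulling out of the pass that just finished: before tearing down, the harness snapshotted nvmf_get_stats and counted the poll groups owning exactly one I/O qpair. spdk_nvme_perf ran with -c 0xF0, i.e. four cores and four TCP connections, so the test proceeds past its guard only when each of the four target poll groups ended up owning exactly one of them. A condensed sketch of that check (rpc.py again standing in for rpc_cmd; the jq expression is the one from the log, while the failure branch is illustrative, not the harness' exact handling):

    count=$(rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    # expect one busy qpair per poll group; anything else means connections bunched up
    [[ $count -ne 4 ]] && echo "I/O qpairs not spread across all 4 poll groups" && exit 1

From here the harness reloads the ice driver, re-probes the NICs, and repeats the whole flow with ADQ actually enabled on both the NIC and the target.
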
00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_devs=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_devs 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_net_devs=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # pci_drivers=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # local -A pci_drivers 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # net_devs=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga net_devs 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # e810=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga e810 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # x722=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga x722 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # mlx=() 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@323 -- # local -ga mlx 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.334 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:21:45.335 11:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:45.335 11:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.335 11:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:21:45.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:21:45.335 00:21:45.335 --- 10.0.0.2 ping statistics --- 00:21:45.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.335 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:21:45.335 00:21:45.335 --- 10.0.0.1 ping statistics --- 00:21:45.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.335 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:45.335 net.core.busy_poll = 1 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:45.335 net.core.busy_read = 1 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:45.335 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=101437 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 101437 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 101437 ']' 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.597 11:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 [2024-12-09 11:56:53.410860] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:21:45.597 [2024-12-09 11:56:53.410930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.857 [2024-12-09 11:56:53.508353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.857 [2024-12-09 11:56:53.562914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
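
The adq_configure_driver steps above are where ADQ proper is enabled for this second pass: hardware TC offload on the ice port, busy polling in the network stack, an mqprio qdisc that carves the channels into two traffic classes (queues 2@0 2@2 assigns queues 0-1 to TC0 and queues 2-3 to TC1), and a hardware-only flower filter that steers NVMe/TCP traffic into the dedicated class. Condensed, with names from this run (the nsexec helper is just shorthand for the ip netns exec prefix):

    nsexec() { ip netns exec cvl_0_0_ns_spdk "$@"; }
    nsexec ethtool --offload cvl_0_0 hw-tc-offload on
    nsexec ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1            # busy-poll budget (us) for poll/select
    sysctl -w net.core.busy_read=1            # busy-poll budget (us) for socket reads
    nsexec tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    nsexec tc qdisc add dev cvl_0_0 ingress
    nsexec tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # SPDK helper that aligns transmit-queue (XPS) selection with the receive queues
    nsexec /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

Its target-side counterpart is the placement-id 1 / sock-priority 1 configuration issued over RPC just below.
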
00:21:45.857 [2024-12-09 11:56:53.562970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.857 [2024-12-09 11:56:53.562979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.857 [2024-12-09 11:56:53.562986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.857 [2024-12-09 11:56:53.562993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.857 [2024-12-09 11:56:53.565257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.857 [2024-12-09 11:56:53.565390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.857 [2024-12-09 11:56:53.565557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.857 [2024-12-09 11:56:53.565557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:46.427 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.428 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 [2024-12-09 11:56:54.396710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 Malloc1 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.688 [2024-12-09 11:56:54.474597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=101638 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:46.688 11:56:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:49.230 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:49.230 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.230 11:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:49.231 "tick_rate": 2400000000, 00:21:49.231 "poll_groups": [ 00:21:49.231 { 00:21:49.231 "name": "nvmf_tgt_poll_group_000", 00:21:49.231 "admin_qpairs": 1, 00:21:49.231 "io_qpairs": 2, 00:21:49.231 "current_admin_qpairs": 1, 00:21:49.231 "current_io_qpairs": 2, 00:21:49.231 "pending_bdev_io": 0, 00:21:49.231 "completed_nvme_io": 27772, 00:21:49.231 "transports": [ 00:21:49.231 { 00:21:49.231 "trtype": "TCP" 00:21:49.231 } 00:21:49.231 ] 00:21:49.231 }, 00:21:49.231 { 00:21:49.231 "name": "nvmf_tgt_poll_group_001", 00:21:49.231 "admin_qpairs": 0, 00:21:49.231 "io_qpairs": 2, 00:21:49.231 "current_admin_qpairs": 0, 00:21:49.231 "current_io_qpairs": 2, 00:21:49.231 "pending_bdev_io": 0, 00:21:49.231 "completed_nvme_io": 37010, 00:21:49.231 "transports": [ 00:21:49.231 { 00:21:49.231 "trtype": "TCP" 00:21:49.231 } 00:21:49.231 ] 00:21:49.231 }, 00:21:49.231 { 00:21:49.231 "name": "nvmf_tgt_poll_group_002", 00:21:49.231 "admin_qpairs": 0, 00:21:49.231 "io_qpairs": 0, 00:21:49.231 "current_admin_qpairs": 0, 00:21:49.231 "current_io_qpairs": 0, 00:21:49.231 "pending_bdev_io": 0, 00:21:49.231 "completed_nvme_io": 0, 00:21:49.231 "transports": [ 00:21:49.231 { 00:21:49.231 "trtype": "TCP" 00:21:49.231 } 00:21:49.231 ] 00:21:49.231 }, 00:21:49.231 { 00:21:49.231 "name": "nvmf_tgt_poll_group_003", 00:21:49.231 "admin_qpairs": 0, 00:21:49.231 "io_qpairs": 0, 00:21:49.231 "current_admin_qpairs": 0, 00:21:49.231 "current_io_qpairs": 0, 00:21:49.231 "pending_bdev_io": 0, 00:21:49.231 "completed_nvme_io": 0, 00:21:49.231 "transports": [ 00:21:49.231 { 00:21:49.231 "trtype": "TCP" 00:21:49.231 } 00:21:49.231 ] 00:21:49.231 } 00:21:49.231 ] 00:21:49.231 }' 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:49.231 11:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 101638 00:21:57.596 Initializing NVMe Controllers 00:21:57.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:57.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:57.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:57.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:57.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:57.596 Initialization complete. Launching workers. 
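The pass/fail probe above is a one-liner over the stats JSON: nvmf_get_stats reports per-poll-group qpair counts, and the test counts the groups that carried no I/O. With this run's output saved to a file, the same filter looks like this (stats.json is a stand-in name for the RPC output):

jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l

Each idle group prints one value (the key count of the selected object), so wc -l yields the idle-group count — 2 here, since nvmf_tgt_poll_group_002 and _003 show current_io_qpairs of 0. The guard [[ 2 -lt 2 ]] therefore does not fire: at least two of the four poll groups stayed idle, which is the expected signature of ADQ confining all connections to the two hardware queues configured above; a smaller count would mean traffic leaked onto extra poll groups.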
00:21:57.596 ======================================================== 00:21:57.596 Latency(us) 00:21:57.596 Device Information : IOPS MiB/s Average min max 00:21:57.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8397.30 32.80 7626.65 977.80 50707.70 00:21:57.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11106.40 43.38 5779.84 1126.88 49959.24 00:21:57.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10044.90 39.24 6372.01 1173.87 51795.49 00:21:57.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9391.20 36.68 6822.95 1191.64 50759.11 00:21:57.596 ======================================================== 00:21:57.596 Total : 38939.80 152.11 6582.42 977.80 51795.49 00:21:57.596 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # sync 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # set +e 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # for i in {1..20} 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:21:57.596 rmmod nvme_tcp 00:21:57.596 rmmod nvme_fabrics 00:21:57.596 rmmod nvme_keyring 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # set -e 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@130 -- # return 0 00:21:57.596 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 101437 ']' 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 101437 ']' 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101437' 00:21:57.597 killing process with pid 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 101437 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:57.597 11:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # iptr 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # remove_spdk_ns 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.597 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.157 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:22:00.157 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:00.157 00:22:00.157 real 0m54.065s 00:22:00.157 user 2m50.112s 00:22:00.157 sys 0m11.511s 00:22:00.157 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.157 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.157 ************************************ 00:22:00.157 END TEST nvmf_perf_adq 00:22:00.157 ************************************ 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.417 ************************************ 00:22:00.417 START TEST nvmf_shutdown 00:22:00.417 ************************************ 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:00.417 * Looking for test storage... 
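The firewall teardown in nvmftestfini works because of how the rule was installed: the ipts wrapper tags every rule it inserts with an identifying comment, so iptr can later remove exactly those rules by filtering them out of the saved ruleset. Both halves appear verbatim in this trace:

# install (earlier): tag the ACCEPT rule for port 4420
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# teardown (here): drop every SPDK_NVMF-tagged rule, leave the rest of the ruleset untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore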
00:22:00.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.417 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:00.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.679 --rc genhtml_branch_coverage=1 00:22:00.679 --rc genhtml_function_coverage=1 00:22:00.679 --rc genhtml_legend=1 00:22:00.679 --rc geninfo_all_blocks=1 00:22:00.679 --rc geninfo_unexecuted_blocks=1 00:22:00.679 00:22:00.679 ' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:00.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.679 --rc genhtml_branch_coverage=1 00:22:00.679 --rc genhtml_function_coverage=1 00:22:00.679 --rc genhtml_legend=1 00:22:00.679 --rc geninfo_all_blocks=1 00:22:00.679 --rc geninfo_unexecuted_blocks=1 00:22:00.679 00:22:00.679 ' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:00.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.679 --rc genhtml_branch_coverage=1 00:22:00.679 --rc genhtml_function_coverage=1 00:22:00.679 --rc genhtml_legend=1 00:22:00.679 --rc geninfo_all_blocks=1 00:22:00.679 --rc geninfo_unexecuted_blocks=1 00:22:00.679 00:22:00.679 ' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:00.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.679 --rc genhtml_branch_coverage=1 00:22:00.679 --rc genhtml_function_coverage=1 00:22:00.679 --rc genhtml_legend=1 00:22:00.679 --rc geninfo_all_blocks=1 00:22:00.679 --rc geninfo_unexecuted_blocks=1 00:22:00.679 00:22:00.679 ' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
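The lcov check that opens this test walks scripts/common.sh's field-by-field version comparison: lt 1.15 2 splits both strings on the characters ., -, and :, then compares numerically until a field differs. A minimal standalone sketch of that logic (the real helper is cmp_versions in scripts/common.sh; this reproduction keeps only what the trace exercises):

# returns 0 (true) when version $1 sorts before $2
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    local v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # versions are equal
}

version_lt 1.15 2 && echo old    # 1 < 2 on the first field, so lcov 1.15 counts as pre-2.0

Because 1.15 sorts before 2, the script exports the older --rc lcov_branch_coverage / lcov_function_coverage spellings of the coverage options, as the LCOV_OPTS assignments above show.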
00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # : 0 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:22:00.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:22:00.679 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@56 -- # have_pci_nics=0 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:00.680 11:57:08 
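One diagnostic in the sourcing above is worth decoding: the line 34 complaint ([: : integer expression expected) comes from '[' '' -eq 1 ']' — an unset variable (its name is not visible in the xtrace) expanded to the empty string, and test(1) cannot parse '' as an integer, so the comparison errors out with status 2 and the branch is simply not taken. The usual defensive spelling gives the expansion a default:

flag=""                                      # stands in for the unset variable in common.sh
[ "$flag" -eq 1 ]                            # -> "[: : integer expression expected", exit status 2
[ "${flag:-0}" -eq 1 ] || echo "disabled"    # empty/unset treated as 0, no error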
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.680 ************************************ 00:22:00.680 START TEST nvmf_shutdown_tc1 00:22:00.680 ************************************ 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # xtrace_disable 00:22:00.680 11:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_devs=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_devs 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_net_devs=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # pci_drivers=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # local -A pci_drivers 00:22:08.822 11:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # net_devs=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga net_devs 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # e810=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga e810 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # x722=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga x722 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # mlx=() 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # local -ga mlx 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:22:08.822 11:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:08.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:08.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:08.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:08.822 11:57:15 
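Device discovery above pairs each supported PCI function with its kernel interface purely through sysfs: the script globs the device's net/ directory and strips the path, as the trace shows for both E810 ports. The same lookup in isolation, using the first address from this run:

pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0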
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:08.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.822 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:22:08.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:22:08.823 00:22:08.823 --- 10.0.0.2 ping statistics --- 00:22:08.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.823 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:08.823 00:22:08.823 --- 10.0.0.1 ping statistics --- 00:22:08.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.823 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=108685 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 108685 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 108685 ']' 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
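Spelled out, nvmf_tcp_init builds the two-endpoint topology this test runs on: the first E810 port moves into a private namespace and becomes the target side, the second stays in the root namespace as the initiator, and the pings confirm reachability in both directions. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                                   # root namespace reaches the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace reaches the initiator

Prepending NVMF_TARGET_NS_CMD to NVMF_APP is what makes nvmf_tgt itself run inside cvl_0_0_ns_spdk, which is why the launch line above starts with ip netns exec.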
00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.823 11:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:08.823 [2024-12-09 11:57:15.980480] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:22:08.823 [2024-12-09 11:57:15.980548] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.823 [2024-12-09 11:57:16.081004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.823 [2024-12-09 11:57:16.132805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.823 [2024-12-09 11:57:16.132860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.823 [2024-12-09 11:57:16.132868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.823 [2024-12-09 11:57:16.132876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.823 [2024-12-09 11:57:16.132882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.823 [2024-12-09 11:57:16.134895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.823 [2024-12-09 11:57:16.135063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.823 [2024-12-09 11:57:16.135232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.823 [2024-12-09 11:57:16.135233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.084 [2024-12-09 11:57:16.840169] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:09.084 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.084 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.085 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.085 Malloc1 
00:22:09.085 [2024-12-09 11:57:16.957848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.344 Malloc2 00:22:09.344 Malloc3 00:22:09.344 Malloc4 00:22:09.344 Malloc5 00:22:09.344 Malloc6 00:22:09.344 Malloc7 00:22:09.344 Malloc8 00:22:09.606 Malloc9 00:22:09.606 Malloc10 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=109069 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 109069 /var/tmp/bdevperf.sock 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 109069 ']' 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:09.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
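The create_subsystems loop above empties rpcs.txt and appends one heredoc of RPC commands per subsystem; the heredoc bodies themselves are not echoed by xtrace, so only the ten cat invocations and the final bare rpc_cmd are visible. Judging from the Malloc1..Malloc10 bdevs and the listener notice that follow, and from the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 settings earlier, each block plausibly takes this shape (a sketch, with the batch inferred rather than read from the log):

rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"
for i in {1..10}; do
    cat >> "$rpcs" <<-EOF
	bdev_malloc_create 64 512 -b Malloc$i
	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
	EOF
done
rpc_cmd < "$rpcs"    # replay the whole batch through one rpc.py session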
00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.606 "method": "bdev_nvme_attach_controller" 00:22:09.606 } 00:22:09.606 EOF 00:22:09.606 )") 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.606 [2024-12-09 11:57:17.412058] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:22:09.606 [2024-12-09 11:57:17.412110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.606 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.606 { 00:22:09.606 "params": { 00:22:09.606 "name": "Nvme$subsystem", 00:22:09.606 "trtype": "$TEST_TRANSPORT", 00:22:09.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.606 "adrfam": "ipv4", 00:22:09.606 "trsvcid": "$NVMF_PORT", 00:22:09.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.606 "hdgst": ${hdgst:-false}, 00:22:09.606 "ddgst": ${ddgst:-false} 00:22:09.606 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 } 00:22:09.607 EOF 00:22:09.607 )") 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.607 { 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme$subsystem", 00:22:09.607 "trtype": "$TEST_TRANSPORT", 00:22:09.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "$NVMF_PORT", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.607 "hdgst": ${hdgst:-false}, 00:22:09.607 "ddgst": ${ddgst:-false} 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 } 00:22:09.607 EOF 00:22:09.607 )") 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.607 { 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme$subsystem", 00:22:09.607 "trtype": "$TEST_TRANSPORT", 00:22:09.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "$NVMF_PORT", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.607 "hdgst": ${hdgst:-false}, 00:22:09.607 "ddgst": ${ddgst:-false} 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 } 00:22:09.607 EOF 00:22:09.607 )") 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:09.607 { 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme$subsystem", 00:22:09.607 "trtype": "$TEST_TRANSPORT", 00:22:09.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.607 "adrfam": "ipv4", 
00:22:09.607 "trsvcid": "$NVMF_PORT", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.607 "hdgst": ${hdgst:-false}, 00:22:09.607 "ddgst": ${ddgst:-false} 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 } 00:22:09.607 EOF 00:22:09.607 )") 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:22:09.607 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme1", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme2", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme3", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme4", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme5", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme6", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme7", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 
"adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme8", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme9", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 },{ 00:22:09.607 "params": { 00:22:09.607 "name": "Nvme10", 00:22:09.607 "trtype": "tcp", 00:22:09.607 "traddr": "10.0.0.2", 00:22:09.607 "adrfam": "ipv4", 00:22:09.607 "trsvcid": "4420", 00:22:09.607 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.607 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.607 "hdgst": false, 00:22:09.607 "ddgst": false 00:22:09.607 }, 00:22:09.607 "method": "bdev_nvme_attach_controller" 00:22:09.607 }' 00:22:09.867 [2024-12-09 11:57:17.502612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.867 [2024-12-09 11:57:17.539262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 109069 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:11.264 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:12.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 109069 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 108685 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.204 { 00:22:12.204 "params": { 00:22:12.204 "name": "Nvme$subsystem", 00:22:12.204 "trtype": "$TEST_TRANSPORT", 00:22:12.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.204 "adrfam": "ipv4", 00:22:12.204 "trsvcid": "$NVMF_PORT", 00:22:12.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.204 "hdgst": ${hdgst:-false}, 00:22:12.204 "ddgst": ${ddgst:-false} 00:22:12.204 }, 00:22:12.204 "method": "bdev_nvme_attach_controller" 00:22:12.204 } 00:22:12.204 EOF 00:22:12.204 )") 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.204 { 00:22:12.204 "params": { 00:22:12.204 "name": "Nvme$subsystem", 00:22:12.204 "trtype": "$TEST_TRANSPORT", 00:22:12.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.204 "adrfam": "ipv4", 00:22:12.204 "trsvcid": "$NVMF_PORT", 00:22:12.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.204 "hdgst": ${hdgst:-false}, 00:22:12.204 "ddgst": ${ddgst:-false} 00:22:12.204 }, 00:22:12.204 "method": "bdev_nvme_attach_controller" 00:22:12.204 } 00:22:12.204 EOF 00:22:12.204 )") 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.204 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.204 { 00:22:12.204 "params": { 00:22:12.204 "name": "Nvme$subsystem", 00:22:12.204 "trtype": "$TEST_TRANSPORT", 00:22:12.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.204 "adrfam": "ipv4", 00:22:12.204 "trsvcid": "$NVMF_PORT", 00:22:12.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.204 "hdgst": ${hdgst:-false}, 00:22:12.204 "ddgst": ${ddgst:-false} 00:22:12.204 }, 00:22:12.204 "method": "bdev_nvme_attach_controller" 00:22:12.204 } 00:22:12.204 EOF 00:22:12.204 )") 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 [2024-12-09 11:57:19.990496] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:22:12.205 [2024-12-09 11:57:19.990553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109640 ] 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:12.205 { 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme$subsystem", 00:22:12.205 "trtype": "$TEST_TRANSPORT", 00:22:12.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.205 
"adrfam": "ipv4", 00:22:12.205 "trsvcid": "$NVMF_PORT", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.205 "hdgst": ${hdgst:-false}, 00:22:12.205 "ddgst": ${ddgst:-false} 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 } 00:22:12.205 EOF 00:22:12.205 )") 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:22:12.205 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme1", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme2", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme3", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme4", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme5", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme6", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.205 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.205 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.205 "hdgst": false, 00:22:12.205 "ddgst": false 00:22:12.205 }, 00:22:12.205 "method": "bdev_nvme_attach_controller" 00:22:12.205 },{ 00:22:12.205 "params": { 00:22:12.205 "name": "Nvme7", 00:22:12.205 "trtype": "tcp", 00:22:12.205 "traddr": "10.0.0.2", 
00:22:12.205 "adrfam": "ipv4", 00:22:12.205 "trsvcid": "4420", 00:22:12.206 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.206 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.206 "hdgst": false, 00:22:12.206 "ddgst": false 00:22:12.206 }, 00:22:12.206 "method": "bdev_nvme_attach_controller" 00:22:12.206 },{ 00:22:12.206 "params": { 00:22:12.206 "name": "Nvme8", 00:22:12.206 "trtype": "tcp", 00:22:12.206 "traddr": "10.0.0.2", 00:22:12.206 "adrfam": "ipv4", 00:22:12.206 "trsvcid": "4420", 00:22:12.206 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.206 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.206 "hdgst": false, 00:22:12.206 "ddgst": false 00:22:12.206 }, 00:22:12.206 "method": "bdev_nvme_attach_controller" 00:22:12.206 },{ 00:22:12.206 "params": { 00:22:12.206 "name": "Nvme9", 00:22:12.206 "trtype": "tcp", 00:22:12.206 "traddr": "10.0.0.2", 00:22:12.206 "adrfam": "ipv4", 00:22:12.206 "trsvcid": "4420", 00:22:12.206 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.206 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.206 "hdgst": false, 00:22:12.206 "ddgst": false 00:22:12.206 }, 00:22:12.206 "method": "bdev_nvme_attach_controller" 00:22:12.206 },{ 00:22:12.206 "params": { 00:22:12.206 "name": "Nvme10", 00:22:12.206 "trtype": "tcp", 00:22:12.206 "traddr": "10.0.0.2", 00:22:12.206 "adrfam": "ipv4", 00:22:12.206 "trsvcid": "4420", 00:22:12.206 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.206 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.206 "hdgst": false, 00:22:12.206 "ddgst": false 00:22:12.206 }, 00:22:12.206 "method": "bdev_nvme_attach_controller" 00:22:12.206 }' 00:22:12.206 [2024-12-09 11:57:20.082759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.465 [2024-12-09 11:57:20.119476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.847 Running I/O for 1 seconds... 
00:22:15.052 1871.00 IOPS, 116.94 MiB/s
00:22:15.052 Latency(us)
00:22:15.052 [2024-12-09T10:57:22.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme1n1 : 1.08 236.97 14.81 0.00 0.00 267127.47 17367.04 251658.24
00:22:15.052 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme2n1 : 1.15 226.98 14.19 0.00 0.00 270667.86 10103.47 241172.48
00:22:15.052 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme3n1 : 1.19 268.12 16.76 0.00 0.00 228495.70 17913.17 246415.36
00:22:15.052 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme4n1 : 1.09 237.48 14.84 0.00 0.00 250514.30 8137.39 269134.51
00:22:15.052 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme5n1 : 1.09 236.17 14.76 0.00 0.00 247873.14 4041.39 228939.09
00:22:15.052 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme6n1 : 1.17 223.12 13.94 0.00 0.00 259592.51 2184.53 262144.00
00:22:15.052 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme7n1 : 1.20 266.73 16.67 0.00 0.00 214178.82 35607.89 249910.61
00:22:15.052 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme8n1 : 1.16 221.35 13.83 0.00 0.00 252075.31 12834.13 255153.49
00:22:15.052 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme9n1 : 1.19 272.10 17.01 0.00 0.00 200918.66 10267.31 244667.73
00:22:15.052 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.052 Verification LBA range: start 0x0 length 0x400
00:22:15.052 Nvme10n1 : 1.21 264.44 16.53 0.00 0.00 204662.19 9175.04 272629.76
00:22:15.052 [2024-12-09T10:57:22.938Z] ===================================================================================================================
00:22:15.052 [2024-12-09T10:57:22.938Z] Total : 2453.45 153.34 0.00 0.00 237172.72 2184.53 272629.76
00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:15.052 11:57:22
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # sync 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # set +e 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # for i in {1..20} 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:22:15.052 rmmod nvme_tcp 00:22:15.052 rmmod nvme_fabrics 00:22:15.052 rmmod nvme_keyring 00:22:15.052 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # set -e 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@130 -- # return 0 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 108685 ']' 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 108685 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 108685 ']' 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 108685 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.313 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108685 00:22:15.313 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:15.313 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:15.313 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108685' 00:22:15.313 killing process with pid 108685 00:22:15.313 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 108685 00:22:15.313 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 108685 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # iptr 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:22:15.575 11:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # remove_spdk_ns 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.575 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:22:17.486 00:22:17.486 real 0m16.937s 00:22:17.486 user 0m34.605s 00:22:17.486 sys 0m6.896s 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:17.486 ************************************ 00:22:17.486 END TEST nvmf_shutdown_tc1 00:22:17.486 ************************************ 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.486 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.747 ************************************ 00:22:17.747 START TEST nvmf_shutdown_tc2 00:22:17.747 ************************************ 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.747 11:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # xtrace_disable 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_devs=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_devs 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_net_devs=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # pci_drivers=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # local -A pci_drivers 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # net_devs=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga net_devs 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # e810=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga e810 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # x722=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga x722 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # mlx=() 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # local -ga mlx 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
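The e810/x722/mlx arrays being assembled here and on the surrounding lines are nvmftestinit's PCI allow-list; the trace then walks each matched device's sysfs node to find its kernel netdev, which is where the two `Found net devices under 0000:4b:00.x` messages a few lines further down come from. Condensed, that walk looks like the sketch below; the operstate read is an assumption, since the trace only shows the already-expanded `[[ up == up ]]` test:

# Sketch of the sysfs walk that follows in the trace: map each
# allow-listed PCI function in pci_devs to its kernel netdev and keep
# the interfaces that are up.
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the interface name
    for net_dev in "${pci_net_devs[@]}"; do
        # assumed equivalent of the trace's "[[ up == up ]]" check
        if [[ $(<"/sys/class/net/$net_dev/operstate") == up ]]; then
            echo "Found net devices under $pci: $net_dev"
            net_devs+=("$net_dev")
        fi
    done
done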
00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:22:17.747 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:17.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:17.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:17.748 11:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:17.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:17.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:22:17.748 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:22:18.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
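Taken together, the nvmf_tcp_init sequence just traced is a short recipe: put the first e810 port (cvl_0_0) into a private network namespace as the target side, leave the second (cvl_0_1) in the root namespace as the initiator, address both ends, and open TCP port 4420. The commands below are the ones from this run, and the ping exchange the transcript resumes afterwards verifies the 10.0.0.1 <-> 10.0.0.2 path in both directions before nvmf_tgt starts:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic from the initiator interface; the SPDK_NVMF
# comment tag is what the iptables-save | grep -v SPDK_NVMF step in
# nvmftestfini (seen earlier in this log) uses to strip the rule again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace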
00:22:18.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms
00:22:18.010
00:22:18.010 --- 10.0.0.2 ping statistics ---
00:22:18.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:18.010 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:18.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:18.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:22:18.010
00:22:18.010 --- 10.0.0.1 ping statistics ---
00:22:18.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:18.010 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=110880
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 110880
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 110880 ']'
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.010 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.010 [2024-12-09 11:57:25.830141] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:22:18.010 [2024-12-09 11:57:25.830191] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.270 [2024-12-09 11:57:25.898583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.270 [2024-12-09 11:57:25.928069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.270 [2024-12-09 11:57:25.928099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.270 [2024-12-09 11:57:25.928105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.270 [2024-12-09 11:57:25.928110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.270 [2024-12-09 11:57:25.928114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.270 [2024-12-09 11:57:25.929313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.270 [2024-12-09 11:57:25.929473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.270 [2024-12-09 11:57:25.929631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.270 [2024-12-09 11:57:25.929632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:18.270 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.270 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.271 [2024-12-09 11:57:26.053286] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.271 11:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:18.271 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.271 
11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:18.271 Malloc1
00:22:18.531 [2024-12-09 11:57:26.160381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:18.531 Malloc2
00:22:18.531 Malloc3
00:22:18.531 Malloc4
00:22:18.531 Malloc5
00:22:18.531 Malloc6
00:22:18.531 Malloc7
00:22:18.531 Malloc8
00:22:18.791 Malloc9
00:22:18.791 Malloc10
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=110955
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 110955 /var/tmp/bdevperf.sock
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 110955 ']'
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:18.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
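
The Malloc1..Malloc10 lines above are the target acknowledging each backing bdev as the create_subsystems loop (target/shutdown.sh@28-29, traced earlier) replays its batched RPC file through a bare rpc_cmd call (shutdown.sh@36). The heredoc bodies are hidden by xtrace, so the following is only a plausible reconstruction of one iteration, using standard SPDK RPC names; the malloc geometry (64 MiB, 512 B blocks) and serial numbers are assumed values, not the script's exact ones.

# Reconstruction (not verbatim) of the create_subsystems batch; sizes and
# serial numbers below are illustrative assumptions.
rm -f "$testdir/rpcs.txt"
for i in {1..10}; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # shutdown.sh@36: replay the whole batch on /var/tmp/spdk.sock

Batching all forty-odd calls through one rpc.py invocation avoids paying Python startup cost per RPC, which is why the ten subsystems appear in the log within a few hundred milliseconds.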
00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.791 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.791 { 00:22:18.791 "params": { 00:22:18.791 "name": "Nvme$subsystem", 00:22:18.791 "trtype": "$TEST_TRANSPORT", 00:22:18.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.791 "adrfam": "ipv4", 00:22:18.791 "trsvcid": "$NVMF_PORT", 00:22:18.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.791 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": 
"bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 [2024-12-09 11:57:26.606049] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:22:18.792 [2024-12-09 11:57:26.606102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110955 ] 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:18.792 { 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme$subsystem", 00:22:18.792 "trtype": "$TEST_TRANSPORT", 00:22:18.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.792 
"adrfam": "ipv4", 00:22:18.792 "trsvcid": "$NVMF_PORT", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.792 "hdgst": ${hdgst:-false}, 00:22:18.792 "ddgst": ${ddgst:-false} 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 } 00:22:18.792 EOF 00:22:18.792 )") 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:22:18.792 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme1", 00:22:18.792 "trtype": "tcp", 00:22:18.792 "traddr": "10.0.0.2", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "4420", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.792 "hdgst": false, 00:22:18.792 "ddgst": false 00:22:18.792 }, 00:22:18.792 "method": "bdev_nvme_attach_controller" 00:22:18.792 },{ 00:22:18.792 "params": { 00:22:18.792 "name": "Nvme2", 00:22:18.792 "trtype": "tcp", 00:22:18.792 "traddr": "10.0.0.2", 00:22:18.792 "adrfam": "ipv4", 00:22:18.792 "trsvcid": "4420", 00:22:18.792 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.792 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:18.792 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme3", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme4", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme5", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme6", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme7", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 
00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme8", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme9", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 },{ 00:22:18.793 "params": { 00:22:18.793 "name": "Nvme10", 00:22:18.793 "trtype": "tcp", 00:22:18.793 "traddr": "10.0.0.2", 00:22:18.793 "adrfam": "ipv4", 00:22:18.793 "trsvcid": "4420", 00:22:18.793 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:18.793 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:18.793 "hdgst": false, 00:22:18.793 "ddgst": false 00:22:18.793 }, 00:22:18.793 "method": "bdev_nvme_attach_controller" 00:22:18.793 }' 00:22:19.053 [2024-12-09 11:57:26.695151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.053 [2024-12-09 11:57:26.731203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.437 Running I/O for 10 seconds... 
00:22:20.437 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.437 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:20.437 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:20.437 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.437 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:20.698 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.958 11:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:20.958 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.219 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 110955 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 110955 ']' 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 110955 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110955 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.219 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110955' 00:22:21.219 killing process with pid 110955 00:22:21.219 11:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 110955
11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 110955
00:22:21.480 2330.00 IOPS, 145.62 MiB/s [2024-12-09T10:57:29.366Z] Received shutdown signal, test time was about 1.019371 seconds
00:22:21.480
00:22:21.480 Latency(us)
00:22:21.480 [2024-12-09T10:57:29.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.480 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme1n1 : 1.00 257.05 16.07 0.00 0.00 245958.19 21517.65 255153.49
00:22:21.480 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme2n1 : 0.99 194.32 12.15 0.00 0.00 316627.91 21517.65 256901.12
00:22:21.480 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme3n1 : 1.00 256.27 16.02 0.00 0.00 236983.89 18568.53 244667.73
00:22:21.480 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme4n1 : 0.98 260.92 16.31 0.00 0.00 227933.23 18022.40 227191.47
00:22:21.480 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme5n1 : 0.96 199.41 12.46 0.00 0.00 291239.82 31675.73 249910.61
00:22:21.480 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme6n1 : 1.02 251.36 15.71 0.00 0.00 225236.48 6690.13 248162.99
00:22:21.480 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme7n1 : 0.98 264.71 16.54 0.00 0.00 208962.59 6990.51 217579.52
00:22:21.480 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.480 Nvme8n1 : 0.99 257.64 16.10 0.00 0.00 211503.79 19114.67 225443.84
00:22:21.480 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.480 Verification LBA range: start 0x0 length 0x400
00:22:21.481 Nvme9n1 : 0.98 260.02 16.25 0.00 0.00 204412.80 17039.36 269134.51
00:22:21.481 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:21.481 Verification LBA range: start 0x0 length 0x400
00:22:21.481 Nvme10n1 : 0.99 194.57 12.16 0.00 0.00 267136.28 18568.53 269134.51
00:22:21.481 [2024-12-09T10:57:29.367Z] ===================================================================================================================
00:22:21.481 [2024-12-09T10:57:29.367Z] Total : 2396.26 149.77 0.00 0.00 239650.21 6690.13 269134.51
00:22:21.481 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 110880
00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # sync 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # set +e 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # for i in {1..20} 00:22:22.867 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:22:22.867 rmmod nvme_tcp 00:22:22.867 rmmod nvme_fabrics 00:22:22.867 rmmod nvme_keyring 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # set -e 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@130 -- # return 0 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 110880 ']' 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 110880 ']' 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110880' 00:22:22.868 killing process with pid 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 110880 00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 
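
The iptr call in the teardown that follows relies on the rule-tagging convention visible at setup: ipts (nvmf/common.sh@786) appends an iptables comment beginning with SPDK_NVMF: to every rule it installs, so cleanup can remove exactly those rules by filtering them out of a saved ruleset instead of tracking them individually. Paraphrased from the trace (these are not verbatim copies of the helpers in nvmf/common.sh), the pair behaves roughly like:

# Paraphrase of the mechanism traced at nvmf/common.sh@786-787.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # tag on insert
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; } # strip on teardown

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # setup (common.sh@288)
iptr                                                       # cleanup (common.sh@298)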
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # iptr
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # remove_spdk_ns
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:22.868 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:22:25.416
00:22:25.416 real 0m7.374s
00:22:25.416 user 0m21.935s
00:22:25.416 sys 0m1.232s
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:25.416 ************************************
00:22:25.416 END TEST nvmf_shutdown_tc2
00:22:25.416 ************************************
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:25.416 ************************************
00:22:25.416 START TEST nvmf_shutdown_tc3
************************************
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs
00:22:25.416 11:57:32
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # xtrace_disable 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.416 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_devs=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_devs 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_net_devs=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # pci_drivers=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # local -A pci_drivers 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # net_devs=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga net_devs 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # e810=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga e810 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # x722=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga x722 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # mlx=() 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # local -ga mlx 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.417 11:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:25.417 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:25.417 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@369 -- # 
[[ ice == unknown ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:25.417 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:25.417 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.417 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.417 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:22:25.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:22:25.417 00:22:25.417 --- 10.0.0.2 ping statistics --- 00:22:25.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.417 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:22:25.418 00:22:25.418 --- 10.0.0.1 ping statistics --- 00:22:25.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.418 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=112394 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 112394 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.418 11:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 112394 ']' 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.418 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.679 [2024-12-09 11:57:33.303014] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:22:25.679 [2024-12-09 11:57:33.303085] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.679 [2024-12-09 11:57:33.396815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.679 [2024-12-09 11:57:33.430848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.679 [2024-12-09 11:57:33.430883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.679 [2024-12-09 11:57:33.430888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.679 [2024-12-09 11:57:33.430893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.679 [2024-12-09 11:57:33.430897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
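(At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace, so only the target-side port cvl_0_0 is visible to it. Reduced to the commands visible in the trace -- the framework prepends the netns wrapper repeatedly, but one level is equivalent -- the launch amounts to the following sketch; the RPC call stands in for waitforlisten's polling:)

    # Sketch: start the NVMe-oF target pinned to cores 1-4 (mask 0x1E)
    # inside the target network namespace, then wait for its RPC socket.
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten amounts to polling the UNIX-domain RPC socket until the
    # app answers; framework_wait_init blocks until initialization completes.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init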
00:22:25.679 [2024-12-09 11:57:33.432188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.679 [2024-12-09 11:57:33.432346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.679 [2024-12-09 11:57:33.432502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.679 [2024-12-09 11:57:33.432505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.251 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.251 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:26.251 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:26.251 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.251 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.512 [2024-12-09 11:57:34.152311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.512 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.513 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:26.513 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.513 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.513 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.513 Malloc1 00:22:26.513 [2024-12-09 11:57:34.261301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.513 Malloc2 00:22:26.513 Malloc3 00:22:26.513 Malloc4 00:22:26.513 Malloc5 00:22:26.773 Malloc6 00:22:26.773 Malloc7 00:22:26.773 Malloc8 00:22:26.773 Malloc9 00:22:26.773 Malloc10 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=112772 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 112772 /var/tmp/bdevperf.sock 00:22:26.773 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 112772 ']' 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.035 11:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 
"name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.035 EOF 00:22:27.035 )") 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.035 [2024-12-09 11:57:34.706894] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:22:27.035 [2024-12-09 11:57:34.706947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112772 ] 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.035 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.035 { 00:22:27.035 "params": { 00:22:27.035 "name": "Nvme$subsystem", 00:22:27.035 "trtype": "$TEST_TRANSPORT", 00:22:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.035 "adrfam": "ipv4", 00:22:27.035 "trsvcid": "$NVMF_PORT", 00:22:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.035 "hdgst": ${hdgst:-false}, 00:22:27.035 "ddgst": ${ddgst:-false} 00:22:27.035 }, 00:22:27.035 "method": "bdev_nvme_attach_controller" 00:22:27.035 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:27.036 { 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme$subsystem", 00:22:27.036 "trtype": "$TEST_TRANSPORT", 00:22:27.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.036 
"adrfam": "ipv4", 00:22:27.036 "trsvcid": "$NVMF_PORT", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.036 "hdgst": ${hdgst:-false}, 00:22:27.036 "ddgst": ${ddgst:-false} 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 } 00:22:27.036 EOF 00:22:27.036 )") 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:22:27.036 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme1", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme2", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme3", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme4", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme5", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme6", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme7", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 
00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme8", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme9", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 },{ 00:22:27.036 "params": { 00:22:27.036 "name": "Nvme10", 00:22:27.036 "trtype": "tcp", 00:22:27.036 "traddr": "10.0.0.2", 00:22:27.036 "adrfam": "ipv4", 00:22:27.036 "trsvcid": "4420", 00:22:27.036 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.036 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.036 "hdgst": false, 00:22:27.036 "ddgst": false 00:22:27.036 }, 00:22:27.036 "method": "bdev_nvme_attach_controller" 00:22:27.036 }' 00:22:27.036 [2024-12-09 11:57:34.796562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.036 [2024-12-09 11:57:34.832740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.419 Running I/O for 10 seconds... 
00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:28.679 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.939 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.199 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.199 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:29.199 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:29.199 11:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 112394 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 112394 ']' 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 112394 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112394 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:29.474 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:29.474 11:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112394'
killing process with pid 112394
11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 112394
11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 112394
[2024-12-09 11:57:37.224751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf53da0 is same with the state(6) to be set
[... same message for tqpair=0xf53da0 repeated dozens of times, timestamps 11:57:37.224751 through 11:57:37.225108; duplicates collapsed ...]
[2024-12-09 11:57:37.226157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf56970 is same with the state(6) to be set
[... same message for tqpair=0xf56970 repeated dozens of times, timestamps 11:57:37.226157 through 11:57:37.226477; duplicates collapsed ...]
[2024-12-09 11:57:37.227858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-09 11:57:37.227894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:11 through cid:30, lba 25984 through 28416 in steps of 128; duplicates collapsed ...]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf54760 is same with the state(6) to be set 00:22:29.476 [2024-12-09 11:57:37.228255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf54760 is same with t[2024-12-09 11:57:37.228264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:22:29.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
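The flood of tcp.c:1790 ERROR lines above (the two 0xf54760 records were printed interleaved by concurrent threads and are shown de-interleaved here) comes from SPDK's recv-state setter being asked to move a TCP qpair into the PDU recv state it is already in. A minimal, self-contained sketch of that guard, with simplified stand-in types rather than SPDK's actual declarations, and with the enum value 6 assumed to be the error state in this build:

#include <stdio.h>

/* Sketch (not SPDK's actual code) of the guard in
 * nvmf_tcp_qpair_set_recv_state() in lib/nvmf/tcp.c that produces the
 * repeated ERROR lines: setting the recv state to the value the qpair
 * already holds logs the message and becomes a no-op. */

enum pdu_recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 /* assumed */ };

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

static void
set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* This is the line flooding the log: the requested state is
         * already in effect, so nothing changes. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR); /* reproduces the message once */
    return 0;
}

Because the setter returns early, each redundant call costs only the log write, which is why a qpair stuck in the error state during teardown can emit dozens of these lines in a few hundred microseconds.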
00:22:29.476 [2024-12-09 11:57:37.228411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.476 [2024-12-09 11:57:37.228419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:29.476 [... identical NOTICE pairs follow for WRITE cid:41 through cid:49 (lba 29824-30848, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
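Each NOTICE pair above is one in-flight command (printed by nvme_io_qpair_print_command) followed by its completion (printed by spdk_nvme_print_completion). The "(00/08)" suffix is the NVMe status code type / status code: per the NVMe base specification, SCT 0x0 is generic command status and SC 0x08 within it is "Command Aborted due to SQ Deletion", which is what deleting a submission queue does to every command still queued on it. A self-contained sketch of that decode:

#include <stdio.h>

/* Decode the "(sct/sc)" pair printed in the completion NOTICEs above,
 * per the NVMe base spec's generic command status codes. */
static const char *
decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "(other status - see the NVMe base spec status code tables)";
}

int main(void)
{
    /* The log prints "ABORTED - SQ DELETION (00/08)" for every aborted I/O. */
    printf("%s (%02x/%02x)\n", decode_status(0x0, 0x08), 0x0u, 0x08u);
    return 0;
}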
[2024-12-09 11:57:37.228580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.476 [2024-12-09 11:57:37.228620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.476 [2024-12-09 11:57:37.228630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 
11:57:37.228755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228922] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.228961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf54c50 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.228974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.477 [2024-12-09 11:57:37.228981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.477 [2024-12-09 11:57:37.229500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229568] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 
00:22:29.477 [2024-12-09 11:57:37.229677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.477 [2024-12-09 11:57:37.229699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is 
same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.229779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55120 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230686] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.478 [2024-12-09 11:57:37.230756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.230789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf555f0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 
00:22:29.479 [2024-12-09 11:57:37.231523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is 
same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.231843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55ac0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232516] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.479 [2024-12-09 11:57:37.232579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 
00:22:29.480 [2024-12-09 11:57:37.232622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is same with the state(6) to be set 00:22:29.480 [2024-12-09 11:57:37.232725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55fb0 is 
00:22:29.480 [2024-12-09 11:57:37.233466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 
[2024-12-09 11:57:37.233752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.480 [2024-12-09 11:57:37.233853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.480 [2024-12-09 11:57:37.233863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 
11:57:37.233923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.233989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.233999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.481 [2024-12-09 11:57:37.234527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.481 [2024-12-09 11:57:37.234534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.234983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.234991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.482 [2024-12-09 11:57:37.235769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.482 [2024-12-09 11:57:37.235824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.235876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.235931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.235978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.236905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.236960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.237909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.237955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.238939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.238993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.483 [2024-12-09 11:57:37.239690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.483 [2024-12-09 11:57:37.239745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.239797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.239846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.239900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.239949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.240000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.240048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.240107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.240154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.240205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.256493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.256556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.256569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.256581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.256590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.256603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.256612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.484 [2024-12-09 11:57:37.256624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.484 [2024-12-09 11:57:37.256633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:29.484 [2024-12-09 11:57:37.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.484 [2024-12-09 11:57:37.256664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 30 further WRITE command/completion NOTICE pairs condensed: cid:34-63, lba 28928-32640 (stride 128), len:128, every command completed ABORTED - SQ DELETION (00/08) ...]
00:22:29.484 [2024-12-09 11:57:37.257326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.485 [2024-12-09 11:57:37.257335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16 further READ command/completion NOTICE pairs condensed: cid:1-16, lba 24704-26624 (stride 128), len:128, every command completed ABORTED - SQ DELETION (00/08) ...]
00:22:29.485 [2024-12-09 11:57:37.257872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:29.485 [2024-12-09 11:57:37.257975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f460 (9): Bad file descriptor
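[Editor's note, not part of the captured log] The "(00/08)" pair printed on every completion above is status code type 0x0 (generic) / status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion": the I/O submission queue was torn down during the controller reset while these READ/WRITE commands were still outstanding. A minimal sketch of how that pair falls out of the 16-bit status field of a completion entry (plain C against the spec's bit layout, no SPDK dependency):

    /* Sketch: decode the NVMe completion status field.
     * Layout per NVMe base spec: bit 0 = phase tag (p), bits 1-8 = status
     * code (SC), bits 9-11 = status code type (SCT), bit 14 = more (m),
     * bit 15 = do not retry (dnr). */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_nvme_status(uint16_t status)
    {
            uint8_t sc  = (status >> 1) & 0xff; /* status code */
            uint8_t sct = (status >> 9) & 0x7;  /* status code type */
            printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
                   sct, sc, status & 0x1,
                   (status >> 14) & 0x1, (status >> 15) & 0x1);
            if (sct == 0x0 && sc == 0x08) {
                    printf("-> ABORTED - SQ DELETION\n");
            }
    }

    int main(void)
    {
            decode_nvme_status(0x08 << 1); /* SCT 00 / SC 08, as logged above */
            return 0;
    }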
00:22:29.485 [2024-12-09 11:57:37.258015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:29.485 [2024-12-09 11:57:37.258027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... cid:1-3 condensed: three further ASYNC EVENT REQUEST (0c) command/completion NOTICE pairs, each ABORTED - SQ DELETION (00/08) ...]
00:22:29.485 [2024-12-09 11:57:37.258096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f8d0 is same with the state(6) to be set
[... the same pattern condensed for the admin qpairs of eight further tqpairs, in order 0x130dbd0, 0x1760c70, 0x130e960, 0x1227610, 0x177cba0, 0x17336c0, 0x1769fd0 and 0x130bc90: four aborted ASYNC EVENT REQUESTs (cid:0-3) each, followed by the same recv-state *ERROR* ...]
00:22:29.486 [2024-12-09 11:57:37.263964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:29.486 [2024-12-09 11:57:37.264003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:29.486 [2024-12-09 11:57:37.264025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130dbd0 (9): Bad file descriptor
00:22:29.486 [2024-12-09 11:57:37.264040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130bc90 (9): Bad file descriptor
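[Editor's note, not part of the captured log] Opcode 0c above is Asynchronous Event Request: the host driver keeps several AERs outstanding on each admin queue, so deleting that queue during a reset necessarily completes all of them with SQ DELETION status, which is exactly the four NOTICE pairs per tqpair shown above. A hedged sketch, assuming SPDK's public host API in include/spdk/nvme.h (this is illustration, not the bdev_nvme code that produced the log):

    /* Sketch: how an SPDK application observes async event completions.
     * Aborted AERs (e.g. during a reset) arrive with an error status and
     * are normally ignored; the driver re-arms AERs after reconnecting. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    return; /* e.g. ABORTED - SQ DELETION during reset */
            }
            printf("async event, cdw0=0x%x\n", cpl->cdw0);
    }

    /* After attaching a controller:
     *   spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
     */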
00:22:29.486 [2024-12-09 11:57:37.264115] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:29.486 [2024-12-09 11:57:37.264738] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:29.486 [2024-12-09 11:57:37.264767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:29.486 [2024-12-09 11:57:37.264794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769fd0 (9): Bad file descriptor
00:22:29.486 [2024-12-09 11:57:37.265278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.486 [2024-12-09 11:57:37.265298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130f460 with addr=10.0.0.2, port=4420
00:22:29.486 [2024-12-09 11:57:37.265309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f460 is same with the state(6) to be set
00:22:29.486 [2024-12-09 11:57:37.266095] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... identical connect() failed (errno = 111) / sock connection error / recv-state *ERROR* triples condensed for tqpair=0x130bc90 and tqpair=0x130dbd0, both to addr=10.0.0.2, port=4420 ...]
00:22:29.486 [2024-12-09 11:57:37.267295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f460 (9): Bad file descriptor
00:22:29.486 [2024-12-09 11:57:37.267380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.486 [2024-12-09 11:57:37.267395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 further READ command/completion NOTICE pairs condensed: cid:7-63, lba 17280-24448 (stride 128), len:128, every command completed ABORTED - SQ DELETION (00/08) ...]
00:22:29.488 [2024-12-09 11:57:37.268494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.488 [2024-12-09 11:57:37.268502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 5 further WRITE command/completion NOTICE pairs condensed: cid:1-5, lba 24704-25216 (stride 128), len:128, every command completed ABORTED - SQ DELETION (00/08) ...]
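[Editor's note, not part of the captured log] The "connect() failed, errno = 111" lines above are ECONNREFUSED: the host is reachable, but nothing is listening on 10.0.0.2:4420 while the target side of the reset is still down, so each reconnect attempt is refused. A minimal plain-POSIX reproduction of that errno (not SPDK code; the address and port are simply the ones from the log):

    /* Sketch: connect() to a reachable address with no listener returns
     * errno 111 (ECONNREFUSED) on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {
                    .sin_family = AF_INET,
                    .sin_port = htons(4420), /* NVMe/TCP port from the log */
            };
            inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    printf("connect() failed, errno = %d (%s)\n",
                           errno, strerror(errno));
            }
            close(fd);
            return 0;
    }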
00:22:29.488 [2024-12-09 11:57:37.268603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711bb0 is same with the state(6) to be set
00:22:29.488 [2024-12-09 11:57:37.268715] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:29.488 [2024-12-09 11:57:37.268760] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:29.488 [2024-12-09 11:57:37.268995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.488 [2024-12-09 11:57:37.269012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1769fd0 with addr=10.0.0.2, port=4420
00:22:29.488 [2024-12-09 11:57:37.269020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769fd0 is same with the state(6) to be set
00:22:29.488 [2024-12-09 11:57:37.269032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130bc90 (9): Bad file descriptor
00:22:29.488 [2024-12-09 11:57:37.269043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130dbd0 (9): Bad file descriptor
00:22:29.488 [2024-12-09 11:57:37.269053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:29.488 [2024-12-09 11:57:37.269061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:29.488 [2024-12-09 11:57:37.269071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:29.488 [2024-12-09 11:57:37.269081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
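[Editor's note, not part of the captured log] The sequence for cnode2 above is the full failed-reset arc: disconnect ("resetting controller"), abort of outstanding I/O and AERs with SQ DELETION, refused reconnect (errno 111), then "controller reinitialization failed" / "in failed state" / "Resetting controller failed". A hedged sketch of the analogous flow through SPDK's public host API; this is an illustration under that assumption, not the internal bdev_nvme reset path that logged these lines:

    /* Sketch: attempting a controller reset via the public API. A reset
     * disconnects the controller (aborting outstanding commands with
     * SQ DELETION status) and tries to reconnect and re-initialize. */
    #include "spdk/nvme.h"

    static void
    try_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                    /* Reconnect/re-init failed (e.g. ECONNREFUSED above);
                     * the controller is left in the failed state. */
                    if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
                            spdk_nvme_detach(ctrlr);
                    }
            }
    }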
00:22:29.488 [2024-12-09 11:57:37.269094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f8d0 (9): Bad file descriptor
[... identical Failed to flush (9): Bad file descriptor *ERROR* lines condensed for tqpair=0x1760c70, 0x130e960, 0x1227610, 0x177cba0 and 0x17336c0 ...]
00:22:29.488 [2024-12-09 11:57:37.270614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.488 [2024-12-09 11:57:37.270633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 36 further READ command/completion NOTICE pairs condensed: cid:1-36, lba 24704-29184 (stride 128), len:128, every command completed ABORTED - SQ DELETION (00/08) ...]
00:22:29.489 [2024-12-09 11:57:37.271346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:29.489 [2024-12-09 11:57:37.271354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 
11:57:37.271541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.489 [2024-12-09 11:57:37.271722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.489 [2024-12-09 11:57:37.271733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.271854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.271863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715400 is same with the state(6) to be set 00:22:29.490 [2024-12-09 11:57:37.271942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:29.490 [2024-12-09 11:57:37.271978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769fd0 (9): Bad file descriptor 00:22:29.490 [2024-12-09 11:57:37.271989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:29.490 [2024-12-09 11:57:37.271997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:29.490 [2024-12-09 11:57:37.272006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:29.490 [2024-12-09 11:57:37.272014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
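The paired *NOTICE* lines above are SPDK draining a dead TCP qpair: for every command still outstanding it prints the command (nvme_io_qpair_print_command) followed by a synthesized completion (spdk_nvme_print_completion). In each completion line, "(00/08)" is status code type 0x00 (generic) and status code 0x08, which the NVMe base spec defines as "Command Aborted due to SQ Deletion"; the trailing p/m/dnr fields are the phase, more, and do-not-retry bits. A minimal, self-contained sketch of decoding that 16-bit status halfword, written against the spec layout rather than SPDK's own structs:

```c
#include <stdint.h>
#include <stdio.h>

/* NVMe CQE status halfword (CQE dword 3, bits 31:16):
 * bit 0 = phase (P), bits 8:1 = status code (SC), bits 11:9 = status code
 * type (SCT), bits 13:12 = command retry delay, bit 14 = more (M),
 * bit 15 = do not retry (DNR). */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = status >> 15;

    /* SCT 0x0 / SC 0x08 = "Command Aborted due to SQ Deletion". */
    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    print_status(0x08 << 1);  /* the status printed throughout this dump */
    return 0;
}
```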
00:22:29.490 [2024-12-09 11:57:37.272022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.272030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.272037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.272045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.273406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:29.490 [2024-12-09 11:57:37.273708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.490 [2024-12-09 11:57:37.273725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130e960 with addr=10.0.0.2, port=4420
00:22:29.490 [2024-12-09 11:57:37.273734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e960 is same with the state(6) to be set
00:22:29.490 [2024-12-09 11:57:37.273742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.273749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.273757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.273764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.274413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.490 [2024-12-09 11:57:37.274428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177cba0 with addr=10.0.0.2, port=4420
00:22:29.490 [2024-12-09 11:57:37.274436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177cba0 is same with the state(6) to be set
00:22:29.490 [2024-12-09 11:57:37.274450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e960 (9): Bad file descriptor
00:22:29.490 [2024-12-09 11:57:37.274793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177cba0 (9): Bad file descriptor
00:22:29.490 [2024-12-09 11:57:37.274822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.274829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.274837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.274844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.274891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:29.490 [2024-12-09 11:57:37.274909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.274916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.274924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.274931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.275274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.490 [2024-12-09 11:57:37.275287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130f460 with addr=10.0.0.2, port=4420
00:22:29.490 [2024-12-09 11:57:37.275295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f460 is same with the state(6) to be set
00:22:29.490 [2024-12-09 11:57:37.275328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f460 (9): Bad file descriptor
00:22:29.490 [2024-12-09 11:57:37.275360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.275367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.275375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.275383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.275461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:29.490 [2024-12-09 11:57:37.275472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:29.490 [2024-12-09 11:57:37.275729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.490 [2024-12-09 11:57:37.275744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130dbd0 with addr=10.0.0.2, port=4420
00:22:29.490 [2024-12-09 11:57:37.275752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dbd0 is same with the state(6) to be set
00:22:29.490 [2024-12-09 11:57:37.276094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.490 [2024-12-09 11:57:37.276105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130bc90 with addr=10.0.0.2, port=4420
00:22:29.490 [2024-12-09 11:57:37.276113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130bc90 is same with the state(6) to be set
00:22:29.490 [2024-12-09 11:57:37.276146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130dbd0 (9): Bad file descriptor
00:22:29.490 [2024-12-09 11:57:37.276156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130bc90 (9): Bad file descriptor
00:22:29.490 [2024-12-09 11:57:37.276192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.276200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.276207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.276214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:29.490 [2024-12-09 11:57:37.276222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:29.490 [2024-12-09 11:57:37.276229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:29.490 [2024-12-09 11:57:37.276237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:29.490 [2024-12-09 11:57:37.276243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
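Every reconnect attempt in this stretch fails inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target side is being torn down, so nvme_tcp_qpair_connect_sock fails immediately and each reset ends with "Resetting controller failed." The failure mode itself can be reproduced with plain POSIX sockets; this is an illustrative sketch, not SPDK's sock layer:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address/port the failed qpair reconnects target in the log. */
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    /* With the host reachable but no listener on the port, connect()
     * fails with errno = 111 (ECONNREFUSED), matching the log's
     * "connect() failed, errno = 111". */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    close(fd);
    return 0;
}
```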
00:22:29.490 [2024-12-09 11:57:37.277399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:29.490 [2024-12-09 11:57:37.277740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.490 [2024-12-09 11:57:37.277755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1769fd0 with addr=10.0.0.2, port=4420 00:22:29.490 [2024-12-09 11:57:37.277763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769fd0 is same with the state(6) to be set 00:22:29.490 [2024-12-09 11:57:37.277806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769fd0 (9): Bad file descriptor 00:22:29.490 [2024-12-09 11:57:37.277840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:29.490 [2024-12-09 11:57:37.277847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:29.490 [2024-12-09 11:57:37.277855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:29.490 [2024-12-09 11:57:37.277861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:29.490 [2024-12-09 11:57:37.278907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.278920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.278932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.278939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.278949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.278956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.490 [2024-12-09 11:57:37.278966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.490 [2024-12-09 11:57:37.278973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.278982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.278990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.491 [2024-12-09 11:57:37.279670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.491 [2024-12-09 11:57:37.279679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.279991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.279998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.280007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.280014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.280023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1513840 is same with the state(6) to be set 00:22:29.492 [2024-12-09 11:57:37.281304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.281318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.492 [2024-12-09 11:57:37.281331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.492 [2024-12-09 11:57:37.281340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:29.492 [2024-12-09 11:57:37.281352 - 11:57:37.282416] [... 62 repeated NOTICE pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command: READ sqid:1 cid:2..63 nsid:1 lba:16640..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed by nvme_qpair.c: 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:29.494 [2024-12-09 11:57:37.282424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712e50 is same with the state(6) to be set
00:22:29.494 [2024-12-09 11:57:37.283706 - 11:57:37.284831] [... 64 repeated NOTICE pairs condensed: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:29.495 [2024-12-09 11:57:37.284839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17140f0 is same with the state(6) to be set
00:22:29.495 [2024-12-09 11:57:37.286116 - 11:57:37.287234] [... 64 repeated NOTICE pairs condensed: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:29.497 [2024-12-09 11:57:37.287242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550000 is same with the state(6) to be set
00:22:29.497 [2024-12-09 11:57:37.290350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:29.497 [2024-12-09 11:57:37.290377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:29.497 [2024-12-09 11:57:37.290390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:29.497 task offset: 25856 on job bdev=Nvme2n1 fails
00:22:29.497 
00:22:29.497 Latency(us)
00:22:29.497 [2024-12-09T10:57:37.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:29.497 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme1n1 ended in about 0.98 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme1n1 : 0.98 130.32 8.15 65.16 0.00 323911.68 16493.23 277872.64
00:22:29.497 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme2n1 ended in about 0.93 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme2n1 : 0.93 205.45 12.84 68.48 0.00 226251.31 6062.08 272629.76
00:22:29.497 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme3n1 ended in about 0.96 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme3n1 : 0.96 199.71 12.48 66.57 0.00 228160.85 17803.95 251658.24
00:22:29.497 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme4n1 ended in about 0.96 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme4n1 : 0.96 199.39 12.46 66.46 0.00 223768.96 22282.24 244667.73
00:22:29.497 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme5n1 : 0.97 137.93 8.62 65.88 0.00 286062.99 22173.01 255153.49
00:22:29.497 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme6n1 ended in about 0.98 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme6n1 : 0.98 130.00 8.13 65.00 0.00 293097.24 22828.37 270882.13
00:22:29.497 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme7n1 ended in about 0.99 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme7n1 : 0.99 194.53 12.16 64.84 0.00 215585.92 16056.32 253405.87
00:22:29.497 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme8n1 ended in about 0.97 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme8n1 : 0.97 197.07 12.32 65.69 0.00 207651.20 9939.63 244667.73
00:22:29.497 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme9n1 ended in about 0.96 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme9n1 : 0.96 199.03 12.44 66.34 0.00 200532.05 22828.37 249910.61
00:22:29.497 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:29.497 Job: Nvme10n1 ended in about 0.99 seconds with error
00:22:29.497 Verification LBA range: start 0x0 length 0x400
00:22:29.497 Nvme10n1 : 0.99 129.37 8.09 64.69 0.00 269366.61 15947.09 253405.87
00:22:29.497 [2024-12-09T10:57:37.383Z] ===================================================================================================================
00:22:29.497 [2024-12-09T10:57:37.383Z] Total : 1722.81 107.68 659.12 0.00 242477.86 6062.08 277872.64
00:22:29.497 [2024-12-09 11:57:37.318889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:29.497 [2024-12-09 11:57:37.318942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:29.497 [2024-12-09 11:57:37.319582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.497 [2024-12-09 11:57:37.319604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130f8d0 with addr=10.0.0.2, port=4420
00:22:29.497 [2024-12-09 11:57:37.319616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f8d0 is same with the state(6) to be set
00:22:29.497 [2024-12-09 11:57:37.319937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.497 [2024-12-09 11:57:37.319948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17336c0 with addr=10.0.0.2, port=4420
00:22:29.497 [2024-12-09 11:57:37.319956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17336c0 is same with the state(6) to be set
00:22:29.497 [2024-12-09 11:57:37.320161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.497 [2024-12-09 11:57:37.320174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1227610 with addr=10.0.0.2, port=4420
00:22:29.497 [2024-12-09 11:57:37.320181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1227610 is same with the state(6) to be set
00:22:29.497 [2024-12-09 11:57:37.320506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.497 [2024-12-09 11:57:37.320516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1760c70 with addr=10.0.0.2, port=4420
00:22:29.497 [2024-12-09 11:57:37.320524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1760c70 is same with the state(6) to be set
00:22:29.497 [2024-12-09 11:57:37.320547] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:29.498 [2024-12-09 11:57:37.320558] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:29.498 [2024-12-09 11:57:37.320570] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:29.498 [2024-12-09 11:57:37.320582] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:29.498 [2024-12-09 11:57:37.320594] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:29.498 [2024-12-09 11:57:37.320605] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:22:29.498 [2024-12-09 11:57:37.321688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:29.498 [2024-12-09 11:57:37.321807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f8d0 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.321826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17336c0 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.321836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1227610 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.321846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1760c70 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.322209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.322223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130e960 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.322231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130e960 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.322571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.322582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177cba0 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.322589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177cba0 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.322940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.322951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130f460 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.322958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f460 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.323181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.323191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130bc90 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.323198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130bc90 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.323360] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.323372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130dbd0 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.323380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130dbd0 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.323563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.498 [2024-12-09 11:57:37.323573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1769fd0 with addr=10.0.0.2, port=4420 00:22:29.498 [2024-12-09 11:57:37.323581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1769fd0 is same with the state(6) to be set 00:22:29.498 [2024-12-09 11:57:37.323590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:29.498 [2024-12-09 11:57:37.323766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130e960 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177cba0 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130f460 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130bc90 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130dbd0 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769fd0 (9): Bad file descriptor 00:22:29.498 [2024-12-09 11:57:37.323850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:29.498 [2024-12-09 11:57:37.323966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.323973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.323980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.323986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:29.498 [2024-12-09 11:57:37.323994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:29.498 [2024-12-09 11:57:37.324000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:29.498 [2024-12-09 11:57:37.324007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:29.498 [2024-12-09 11:57:37.324013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:29.760 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 112772 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 112772 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 112772 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:30.702 11:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # sync 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # set +e 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # for i in {1..20} 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:22:30.702 rmmod nvme_tcp 00:22:30.702 rmmod nvme_fabrics 00:22:30.702 rmmod nvme_keyring 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # set -e 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@130 -- # return 0 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 112394 ']' 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 112394 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 112394 ']' 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 112394 00:22:30.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (112394) - No such process 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 112394 is not found' 00:22:30.702 Process with pid 112394 is not found 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # iptr 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # remove_spdk_ns
00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:30.702 11:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:33.250 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:22:33.250 
00:22:33.250 real 0m7.787s
00:22:33.250 user 0m19.056s
00:22:33.250 sys 0m1.296s
00:22:33.250 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:33.250 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:33.250 ************************************
00:22:33.250 END TEST nvmf_shutdown_tc3
00:22:33.250 ************************************
00:22:33.250 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:22:33.250 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:33.251 ************************************
00:22:33.251 START TEST nvmf_shutdown_tc4
00:22:33.251 ************************************
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@310 -- # xtrace_disable 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_devs=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_devs 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_net_devs=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@318 -- # pci_drivers=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@318 -- # local -A pci_drivers 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # net_devs=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga net_devs 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # e810=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga e810 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # x722=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga x722 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@323 -- # mlx=() 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@323 -- # local -ga mlx 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:33.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:33.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:22:33.251 11:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:33.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:33.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.251 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:22:33.252 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:22:33.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms
00:22:33.252 
00:22:33.252 --- 10.0.0.2 ping statistics ---
00:22:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:33.252 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:33.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:33.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:22:33.252 
00:22:33.252 --- 10.0.0.1 ping statistics ---
00:22:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:33.252 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=114124
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 114124
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 114124 ']'
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.252 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:33.513 [2024-12-09 11:57:41.184591] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:22:33.513 [2024-12-09 11:57:41.184667] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.513 [2024-12-09 11:57:41.275967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.513 [2024-12-09 11:57:41.310723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.513 [2024-12-09 11:57:41.310757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.513 [2024-12-09 11:57:41.310763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.513 [2024-12-09 11:57:41.310768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.513 [2024-12-09 11:57:41.310772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.513 [2024-12-09 11:57:41.312299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.513 [2024-12-09 11:57:41.312458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.513 [2024-12-09 11:57:41.312614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.513 [2024-12-09 11:57:41.312616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.458 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.458 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:34.458 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:34.458 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.458 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.458 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.458 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.458 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.458 [2024-12-09 11:57:42.036558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:34.459 11:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:34.459 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:34.459 Malloc1
00:22:34.459 [2024-12-09 11:57:42.144415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:34.459 Malloc2
00:22:34.459 Malloc3
00:22:34.459 Malloc4
00:22:34.459 Malloc5
00:22:34.459 Malloc6
00:22:34.720 Malloc7
00:22:34.720 Malloc8
00:22:34.720 Malloc9
00:22:34.720 Malloc10
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=114335
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:22:34.720 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:22:34.982 [2024-12-09 11:57:42.621040] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 114124 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 114124 ']' 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 114124 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114124 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114124' 00:22:40.279 killing process with pid 114124 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 114124 00:22:40.279 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 114124 00:22:40.279 [2024-12-09 11:57:47.619898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.279 [2024-12-09 11:57:47.619938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is 
same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.619999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7c40 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8130 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8130 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8620 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8620 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8620 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8620 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.620755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8620 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7770 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea360 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcea360 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.621895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcea830 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcead00 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9e90 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9e90 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9e90 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622682] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8fe0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce94d0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce94d0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce94d0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce94d0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.622976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce94d0 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 starting I/O failed: -6 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 starting I/O failed: -6 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 [2024-12-09 11:57:47.623194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 [2024-12-09 11:57:47.623219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.623224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 [2024-12-09 11:57:47.623228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.623234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.623239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.623244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce99a0 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 starting I/O failed: -6 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 starting I/O failed: -6 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 starting I/O failed: -6 00:22:40.280 Write completed with error (sct=0,
sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 [2024-12-09 11:57:47.623433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.280 [2024-12-09 11:57:47.623446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.280 [2024-12-09 11:57:47.623451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.280 starting I/O failed: -6 00:22:40.280 [2024-12-09 11:57:47.623458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.280 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.623463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.623468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.623473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.623478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.623483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8b10 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.623630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.281 starting I/O failed: -6 00:22:40.281 starting I/O failed: -6 00:22:40.281 starting I/O failed: -6 00:22:40.281 NVMe io qpair process completion error 00:22:40.281 [2024-12-09 11:57:47.625795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb6a0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb6a0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb6a0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb70 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb70 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb70 is same with
the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb70 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.625874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebb70 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.626130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.626144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 starting I/O failed: -6 00:22:40.281 [2024-12-09 11:57:47.626149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.626155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.626160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.626165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 [2024-12-09 11:57:47.626169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb1d0 is same with the state(6) to be set 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.626268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:40.281 starting I/O failed: -6 00:22:40.281 starting I/O failed: -6 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with 
error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 [2024-12-09 11:57:47.627356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O 
failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.281 Write completed with error (sct=0, sc=8) 00:22:40.281 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 [2024-12-09 11:57:47.628513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:40.282 NVMe io qpair process completion error 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write 
completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 [2024-12-09 11:57:47.628975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 [2024-12-09 11:57:47.628991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 [2024-12-09 11:57:47.628996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 [2024-12-09 11:57:47.629001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 starting I/O failed: -6 00:22:40.282 [2024-12-09 11:57:47.629006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 [2024-12-09 11:57:47.629011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 [2024-12-09 11:57:47.629017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 [2024-12-09 11:57:47.629022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13130 is same with the state(6) to be set 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 [2024-12-09 11:57:47.629559]
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:40.282 starting I/O failed: -6 00:22:40.282 starting I/O failed: -6 00:22:40.282 starting I/O failed: -6 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 [2024-12-09 11:57:47.630881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 
00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.282 Write completed with error (sct=0, sc=8) 00:22:40.282 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 [2024-12-09 11:57:47.631805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error 
-6 (No such device or address) on qpair id 4 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, 
sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 [2024-12-09 11:57:47.633271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.283 NVMe io qpair process completion error 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error 
(sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 starting I/O failed: -6 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 Write completed with error (sct=0, sc=8) 00:22:40.283 [2024-12-09 11:57:47.634457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:40.284 starting I/O failed: -6 00:22:40.284 starting I/O failed: -6 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 
00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 [2024-12-09 11:57:47.635394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 
00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 [2024-12-09 11:57:47.636357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.284 Write completed with error (sct=0, sc=8) 00:22:40.284 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error 
(sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error (sct=0, sc=8) 00:22:40.285 starting I/O failed: -6 00:22:40.285 Write completed with error 
(sct=0, sc=8) 00:22:40.285 starting I/O failed: -6
00:22:40.285 [2024-12-09 11:57:47.638611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.285 NVMe io qpair process completion error
00:22:40.285 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:22:40.285 [2024-12-09 11:57:47.639602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:40.285 [... repeated write failure entries omitted ...]
00:22:40.285 [2024-12-09 11:57:47.640400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:40.286 [... repeated write failure entries omitted ...]
00:22:40.286 [2024-12-09 11:57:47.641324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.286 [... repeated write failure entries omitted ...]
00:22:40.286 [2024-12-09 11:57:47.642970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:40.286 NVMe io qpair process completion error
00:22:40.286 [... repeated write failure entries omitted ...]
00:22:40.287 [2024-12-09 11:57:47.644034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:40.287 [... repeated write failure entries omitted ...]
00:22:40.287 [2024-12-09 11:57:47.644862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:40.287 [... repeated write failure entries omitted ...]
00:22:40.287 [2024-12-09 11:57:47.645792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.288 [... repeated write failure entries omitted ...]
00:22:40.288 [2024-12-09 11:57:47.647652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:40.288 NVMe io qpair process completion error
00:22:40.288 [... repeated write failure entries omitted ...]
00:22:40.288 [2024-12-09 11:57:47.649012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:40.288 [... repeated write failure entries omitted ...]
00:22:40.288 [2024-12-09 11:57:47.649981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.289 [... repeated write failure entries omitted ...]
00:22:40.289 [2024-12-09 11:57:47.650885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:40.289 [... repeated write failure entries omitted ...]
00:22:40.289 [2024-12-09 11:57:47.653734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:40.289 NVMe io qpair process completion error
00:22:40.289 [... repeated write failure entries omitted ...]
00:22:40.290 [2024-12-09 11:57:47.655000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:40.290 [... repeated write failure entries omitted ...]
00:22:40.290 [2024-12-09 11:57:47.655985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.290 [... repeated write failure entries omitted ...]
00:22:40.290 [2024-12-09 11:57:47.656904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:40.290 [... repeated write failure entries omitted ...]
00:22:40.291 [2024-12-09 11:57:47.658349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:40.291 NVMe io qpair process completion error
00:22:40.291 [... repeated write failure entries omitted ...]
00:22:40.291 [2024-12-09 11:57:47.659482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:40.291 [... repeated write failure entries omitted ...]
00:22:40.291 [2024-12-09 11:57:47.660298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:40.291 [... repeated write failure entries omitted ...] 00:22:40.291 Write completed
with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 starting I/O failed: -6 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.291 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 [2024-12-09 11:57:47.661218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 
00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 
00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 [2024-12-09 11:57:47.664167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:40.292 NVMe io qpair process completion error 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 
00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 [2024-12-09 11:57:47.665295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.292 starting I/O failed: -6 00:22:40.292 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 
00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 [2024-12-09 11:57:47.666209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error 
(sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 [2024-12-09 11:57:47.667102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 
00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.293 Write completed with error (sct=0, sc=8) 00:22:40.293 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 starting I/O failed: -6 00:22:40.294 [2024-12-09 11:57:47.668698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:40.294 NVMe io qpair process completion error 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 
Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 
00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error (sct=0, sc=8) 00:22:40.294 Write completed with error 
00:22:40.295 Initializing NVMe Controllers
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:40.295 Controller IO queue size 128, less than required.
00:22:40.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:40.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:40.295 Initialization complete. Launching workers.
00:22:40.295 ========================================================
00:22:40.295 Latency(us)
00:22:40.295 Device Information : IOPS MiB/s Average min max
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1886.89 81.08 67857.77 672.36 123944.09
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1885.17 81.00 67962.52 665.03 151463.58
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1887.97 81.12 67881.80 831.10 128160.74
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1911.17 82.12 67095.84 912.63 130856.90
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1836.18 78.90 69767.65 647.29 119955.81
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1875.93 80.61 68002.72 450.64 120069.51
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1856.16 79.76 68425.16 613.83 121181.60
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1903.01 81.77 66762.18 674.88 121987.35
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1887.54 81.11 67337.44 828.52 121628.79
00:22:40.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1913.75 82.23 66434.72 722.94 120651.88
00:22:40.295 ========================================================
00:22:40.295 Total : 18843.77 809.69 67742.52 450.64 151463.58
00:22:40.295
00:22:40.295 [2024-12-09 11:57:47.678541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978410 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977890 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979900 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977560 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979ae0 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977ef0 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978a70 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977bc0 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978740 is same with the state(6) to be set
00:22:40.295 [2024-12-09 11:57:47.678834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979720 is same with the state(6) to be set
00:22:40.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
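The "Controller IO queue size 128, less than required" advisories above come from the perf initiator; this run's exact command line is not shown in this excerpt. The sketch below is therefore only illustrative: -r/-q/-o/-w/-t are standard spdk_nvme_perf options, but the values are assumptions, and capping -q at or below the controller's 128 queue entries is what the advisory is asking for.

    # Hedged sketch, not the command recorded by this run: keep the queue
    # depth (-q) within the controller's 128-entry IO queue so requests are
    # not queued inside the NVMe driver, per the advisory above.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$PERF" \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -q 64 \
        -o 4096 \
        -w write \
        -t 10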
00:22:40.295 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 114335
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 114335
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 114335
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@122 -- # sync
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # set +e
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # for i in {1..20}
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # set -e
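The NOT wait 114335 trace above walks through autotest_common.sh's expected-failure helper: the wrapped command must fail for the test to pass. A minimal sketch of that shape, simplified from the steps visible in the trace and not the verbatim source:

    # NOT runs a command and succeeds only when the command fails; the trace
    # above shows es=1 after `wait 114335`, so `NOT wait 114335` returns 0.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # pass signal-style exit codes through
        (( !es == 0 ))                   # invert: failure of "$@" is success
    }
    NOT wait 114335 && echo "shutdown app is gone, as the test expects"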
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@130 -- # return 0
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 114124 ']'
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 114124
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 114124 ']'
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 114124
00:22:41.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (114124) - No such process
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 114124 is not found'
00:22:41.236 Process with pid 114124 is not found
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # iptr
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # remove_spdk_ns
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:41.236 11:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:43.149 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:22:43.410
00:22:43.410 real 0m10.292s
00:22:43.410 user 0m27.997s
00:22:43.410 sys 0m4.061s
00:22:43.410 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:43.410 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:43.410 ************************************
00:22:43.410 END TEST nvmf_shutdown_tc4
00:22:43.410 ************************************
00:22:43.410 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:43.410
00:22:43.410 real 0m42.961s
00:22:43.410 user 1m43.850s
00:22:43.410 sys 0m13.837s
00:22:43.410 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:43.410 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
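The killprocess 114124 trace above relies on kill -0, which delivers no signal and only probes whether the PID still exists before any real kill is attempted. A minimal sketch of that pattern, simplified rather than copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
        else
            # the branch taken above: the app already exited during shutdown
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess 114124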
00:22:43.410 ************************************
00:22:43.410 END TEST nvmf_shutdown
00:22:43.410 ************************************
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:43.411 ************************************
00:22:43.411 START TEST nvmf_nsid
00:22:43.411 ************************************
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:43.411 * Looking for test storage...
00:22:43.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:22:43.411 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
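run_test, whose START/END banners frame each test above, is essentially a timed wrapper around a test script. A hedged sketch of its observable behavior only; the real autotest_common.sh wrapper also manages xtrace state and nested timing, which this omits:

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # mirrors the invocation traced above
    run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp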
ver1_l : ver2_l) )) 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.672 --rc genhtml_branch_coverage=1 00:22:43.672 --rc genhtml_function_coverage=1 00:22:43.672 --rc genhtml_legend=1 00:22:43.672 --rc geninfo_all_blocks=1 00:22:43.672 --rc geninfo_unexecuted_blocks=1 00:22:43.672 00:22:43.672 ' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.672 --rc genhtml_branch_coverage=1 00:22:43.672 --rc genhtml_function_coverage=1 00:22:43.672 --rc genhtml_legend=1 00:22:43.672 --rc geninfo_all_blocks=1 00:22:43.672 --rc geninfo_unexecuted_blocks=1 00:22:43.672 00:22:43.672 ' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.672 --rc genhtml_branch_coverage=1 00:22:43.672 --rc genhtml_function_coverage=1 00:22:43.672 --rc genhtml_legend=1 00:22:43.672 --rc geninfo_all_blocks=1 00:22:43.672 --rc geninfo_unexecuted_blocks=1 00:22:43.672 00:22:43.672 ' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.672 --rc genhtml_branch_coverage=1 00:22:43.672 --rc genhtml_function_coverage=1 00:22:43.672 --rc genhtml_legend=1 00:22:43.672 --rc geninfo_all_blocks=1 00:22:43.672 --rc geninfo_unexecuted_blocks=1 00:22:43.672 00:22:43.672 ' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # : 0 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:22:43.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@56 -- # have_pci_nics=0 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:43.672 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@310 -- # xtrace_disable 00:22:43.673 11:57:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_devs=() 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_devs 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_net_devs=() 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@318 -- # pci_drivers=() 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@318 -- # local -A pci_drivers 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # net_devs=() 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga net_devs 00:22:51.810 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # e810=() 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga e810 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # x722=() 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga x722 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@323 -- # mlx=() 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@323 -- # local -ga mlx 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 
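The trace above is nvmf/common.sh taking stock of this rig's NICs before the run: both ports of an Intel E810 (PCI 0x8086:0x159b, bound to the ice driver) are accepted for the tcp transport, and the following steps resolve each PCI function to its kernel interface through sysfs. A minimal standalone sketch of that resolution, with the vendor/device pair hard-coded for illustration -- common.sh keeps a fuller table covering e810, x722 and the various mlx IDs built up above:

#!/usr/bin/env bash
# Map Intel E810 PCI functions (8086:159b) to their kernel net interfaces,
# the same /sys/bus/pci/devices/$pci/net/* walk this log traces.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue   # skip PCI functions with no netdev exposed
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done

On this machine the sketch would print the same two "Found net devices under 0000:4b:00.x: cvl_0_x" lines that appear in the trace just below.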
00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:51.811 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.811 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # is_hw=yes 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.811 11:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:22:51.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:22:51.811 00:22:51.811 --- 10.0.0.2 ping statistics --- 00:22:51.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.811 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:22:51.811 00:22:51.811 --- 10.0.0.1 ping statistics --- 00:22:51.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.811 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:51.811 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # return 0 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@505 -- # nvmfpid=119805 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@506 -- # waitforlisten 119805 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 119805 ']' 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.812 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.812 [2024-12-09 11:57:58.905526] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:22:51.812 [2024-12-09 11:57:58.905598] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.812 [2024-12-09 11:57:59.002786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.812 [2024-12-09 11:57:59.054528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.812 [2024-12-09 11:57:59.054584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.812 [2024-12-09 11:57:59.054592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.812 [2024-12-09 11:57:59.054599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.812 [2024-12-09 11:57:59.054605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.812 [2024-12-09 11:57:59.055383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=120004 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@765 -- # local ip 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@766 -- # ip_candidates=() 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@766 -- # local -A ip_candidates 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@779 -- # echo 10.0.0.1 
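With the first target (nvmf_tgt, pid 119805) now spinning up inside the namespace via 'ip netns exec cvl_0_0_ns_spdk', the network wiring that made that possible is easy to lose in the trace above. A condensed replay of it, using the exact commands from this log; the interface names cvl_0_0/cvl_0_1 are specific to this E810 rig:

# Split the two E810 ports across network namespaces so one machine can act
# as target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, root
# namespace) over a real NIC-to-NIC TCP path instead of loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on the initiator-side port; the comment tag is what
# lets the iptables-save | grep -v SPDK_NVMF cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions before starting any target, as the pings above did.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1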
00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f28e54ce-8118-4664-8bea-245f70d4d2b8 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=945bf072-dc71-4aab-aaa2-bbb3e2c553c7 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=88a53a65-1090-4474-9179-8279969ddb39 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:52.073 null0 00:22:52.073 null1 00:22:52.073 null2 00:22:52.073 [2024-12-09 11:57:59.819973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.073 [2024-12-09 11:57:59.820350] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:22:52.073 [2024-12-09 11:57:59.820408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120004 ] 00:22:52.073 [2024-12-09 11:57:59.844247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 120004 /var/tmp/tgt2.sock 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 120004 ']' 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:52.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
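The rpc_cmd at nsid.sh@63 configures the second target over /var/tmp/tgt2.sock as one batch, so only its effects show up in the trace (the null0/null1/null2 bdevs, the listener on 10.0.0.1:4421, and namespaces carrying the three UUIDs generated above). A representative reconstruction with SPDK's rpc.py; the bdev sizes and the single-subsystem layout are illustrative assumptions -- the script also defines cnode0 and cnode1, but only the cnode2 path that the connect below exercises is sketched:

rpc="scripts/rpc.py -s /var/tmp/tgt2.sock"   # run from the spdk checkout
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a   # -a: allow any host
$rpc bdev_null_create null0 100 4096         # 100 MB, 4096-byte blocks (assumed)
$rpc bdev_null_create null1 100 4096
$rpc bdev_null_create null2 100 4096
# Attach each null bdev as a namespace with an explicit UUID, so the NGUID
# checks later in the test have a known value to compare against.
$rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid "$ns1uuid"
$rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 --uuid "$ns2uuid"
$rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 --uuid "$ns3uuid"
$rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421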
00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.073 11:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:52.073 [2024-12-09 11:57:59.912713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.334 [2024-12-09 11:57:59.964874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.595 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.595 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:52.595 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:52.856 [2024-12-09 11:58:00.539217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.856 [2024-12-09 11:58:00.555403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:52.856 nvme0n1 nvme0n2 00:22:52.856 nvme1n1 00:22:52.856 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:52.856 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:52.856 11:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:54.239 11:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:55.182 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:55.182 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:55.442 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f28e54ce-8118-4664-8bea-245f70d4d2b8 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # tr -d - 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f28e54ce811846648bea245f70d4d2b8 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F28E54CE811846648BEA245F70D4D2B8 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F28E54CE811846648BEA245F70D4D2B8 == \F\2\8\E\5\4\C\E\8\1\1\8\4\6\6\4\8\B\E\A\2\4\5\F\7\0\D\4\D\2\B\8 ]] 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 945bf072-dc71-4aab-aaa2-bbb3e2c553c7 00:22:55.442 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # tr -d - 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=945bf072dc714aabaaa2bbb3e2c553c7 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 945BF072DC714AABAAA2BBB3E2C553C7 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 945BF072DC714AABAAA2BBB3E2C553C7 == \9\4\5\B\F\0\7\2\D\C\7\1\4\A\A\B\A\A\A\2\B\B\B\3\E\2\C\5\5\3\C\7 ]] 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:55.443 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 88a53a65-1090-4474-9179-8279969ddb39 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # tr -d - 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=88a53a651090447491798279969ddb39 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 88A53A651090447491798279969DDB39 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 88A53A651090447491798279969DDB39 == \8\8\A\5\3\A\6\5\1\0\9\0\4\4\7\4\9\1\7\9\8\2\7\9\9\6\9\D\D\B\3\9 ]] 00:22:55.443 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 120004 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 120004 ']' 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 120004 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:55.704 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120004 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120004' 00:22:55.705 killing process with pid 120004 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 120004 00:22:55.705 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 120004 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@122 -- # sync 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # set 
+e 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # for i in {1..20} 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:22:55.965 rmmod nvme_tcp 00:22:55.965 rmmod nvme_fabrics 00:22:55.965 rmmod nvme_keyring 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # set -e 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@130 -- # return 0 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@513 -- # '[' -n 119805 ']' 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@514 -- # killprocess 119805 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 119805 ']' 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 119805 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.965 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119805 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119805' 00:22:56.225 killing process with pid 119805 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 119805 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 119805 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # iptr 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # iptables-save 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:56.225 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # iptables-restore 00:22:56.225 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.225 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # remove_spdk_ns 00:22:56.225 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.225 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.225 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:22:58.767 00:22:58.767 real 0m14.893s 00:22:58.767 user 0m11.378s 00:22:58.767 
sys 0m6.875s 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:58.767 ************************************ 00:22:58.767 END TEST nvmf_nsid 00:22:58.767 ************************************ 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:58.767 00:22:58.767 real 12m51.170s 00:22:58.767 user 27m0.278s 00:22:58.767 sys 3m50.593s 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.767 11:58:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:58.767 ************************************ 00:22:58.767 END TEST nvmf_target_extra 00:22:58.767 ************************************ 00:22:58.767 11:58:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:58.767 11:58:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:58.767 11:58:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.767 11:58:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.767 ************************************ 00:22:58.767 START TEST nvmf_host 00:22:58.767 ************************************ 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:58.767 * Looking for test storage... 00:22:58.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.767 --rc genhtml_branch_coverage=1 00:22:58.767 --rc genhtml_function_coverage=1 00:22:58.767 --rc genhtml_legend=1 00:22:58.767 --rc geninfo_all_blocks=1 00:22:58.767 --rc geninfo_unexecuted_blocks=1 00:22:58.767 00:22:58.767 ' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.767 --rc genhtml_branch_coverage=1 00:22:58.767 --rc genhtml_function_coverage=1 00:22:58.767 --rc genhtml_legend=1 00:22:58.767 --rc geninfo_all_blocks=1 00:22:58.767 --rc geninfo_unexecuted_blocks=1 00:22:58.767 00:22:58.767 ' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.767 --rc genhtml_branch_coverage=1 00:22:58.767 --rc genhtml_function_coverage=1 00:22:58.767 --rc genhtml_legend=1 00:22:58.767 --rc geninfo_all_blocks=1 00:22:58.767 --rc geninfo_unexecuted_blocks=1 00:22:58.767 00:22:58.767 ' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:58.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.767 --rc genhtml_branch_coverage=1 00:22:58.767 --rc genhtml_function_coverage=1 00:22:58.767 --rc genhtml_legend=1 00:22:58.767 --rc geninfo_all_blocks=1 00:22:58.767 --rc geninfo_unexecuted_blocks=1 00:22:58.767 00:22:58.767 ' 00:22:58.767 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
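Before the multicontroller test gets going, it is worth unpacking the assertion that closed nvmf_nsid above: a namespace created with an explicit UUID must report an NGUID equal to that UUID with the dashes removed, compared case-insensitively. uuid2nguid is essentially 'tr -d -', and nvme_get_nguid reads the live value back with nvme-cli and jq, exactly as traced at nsid.sh@42-43. A standalone replay, with the device name and UUID taken from the first passing check in this run:

uuid=f28e54ce-8118-4664-8bea-245f70d4d2b8    # ns1uuid from this run
want=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
# nvme id-ns reports the namespace's NGUID; normalize case before comparing.
got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
if [[ $got == "$want" ]]; then
    echo "nguid $got matches uuid $uuid"
else
    echo "nguid mismatch: $got != $want" >&2
fi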
00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # : 0 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:22:58.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@56 -- # have_pci_nics=0 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.768 ************************************ 00:22:58.768 START TEST nvmf_multicontroller 00:22:58.768 ************************************ 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:58.768 * Looking for test storage... 
00:22:58.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.768 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:59.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.030 --rc genhtml_branch_coverage=1 00:22:59.030 --rc genhtml_function_coverage=1 00:22:59.030 --rc genhtml_legend=1 00:22:59.030 --rc geninfo_all_blocks=1 00:22:59.030 --rc geninfo_unexecuted_blocks=1 00:22:59.030 00:22:59.030 ' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:59.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.030 --rc genhtml_branch_coverage=1 00:22:59.030 --rc genhtml_function_coverage=1 00:22:59.030 --rc genhtml_legend=1 00:22:59.030 --rc geninfo_all_blocks=1 00:22:59.030 --rc geninfo_unexecuted_blocks=1 00:22:59.030 00:22:59.030 ' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:59.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.030 --rc genhtml_branch_coverage=1 00:22:59.030 --rc genhtml_function_coverage=1 00:22:59.030 --rc genhtml_legend=1 00:22:59.030 --rc geninfo_all_blocks=1 00:22:59.030 --rc geninfo_unexecuted_blocks=1 00:22:59.030 00:22:59.030 ' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:59.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.030 --rc genhtml_branch_coverage=1 00:22:59.030 --rc genhtml_function_coverage=1 00:22:59.030 --rc genhtml_legend=1 00:22:59.030 --rc geninfo_all_blocks=1 00:22:59.030 --rc geninfo_unexecuted_blocks=1 00:22:59.030 00:22:59.030 ' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:59.030 11:58:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # : 0 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:22:59.030 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:22:59.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@56 -- # have_pci_nics=0 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.031 11:58:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # xtrace_disable 00:22:59.031 11:58:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_devs=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_devs 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_net_devs=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # pci_drivers=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # local -A pci_drivers 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # net_devs=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga net_devs 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # e810=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga e810 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # x722=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga x722 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@323 -- # mlx=() 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@323 -- # local -ga mlx 00:23:07.173 
11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:07.173 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:07.173 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:07.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:07.174 11:58:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:07.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:07.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 
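The device scan above matches the e810 PCI IDs (0x1592, 0x159b) from the pci_bus_cache and then resolves each PCI function to its kernel net interface through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines come from. A condensed sketch of that sysfs walk (PCI addresses are the ones this host reported; substitute your own):

    pci_devs=(0000:4b:00.0 0000:4b:00.1)   # e810 functions found in the scan above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # a bound port exposes its netdev under the PCI device's net/ directory
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] || continue                     # glob matched nothing: no net driver bound
            name=${dev##*/}
            state=$(cat "$dev/operstate" 2>/dev/null)     # the trace expects "up" here
            echo "Found net devices under $pci: $name ($state)"
            net_devs+=("$name")
        done
    done
    echo "TCP interface list: ${net_devs[*]}"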
00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:23:07.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:23:07.174 00:23:07.174 --- 10.0.0.2 ping statistics --- 00:23:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.174 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:23:07.174 00:23:07.174 --- 10.0.0.1 ping statistics --- 00:23:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.174 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:07.174 11:58:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=125105 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 125105 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 125105 ']' 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.174 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.174 [2024-12-09 11:58:14.103803] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:23:07.174 [2024-12-09 11:58:14.103874] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.174 [2024-12-09 11:58:14.202024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.174 [2024-12-09 11:58:14.253666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.174 [2024-12-09 11:58:14.253718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.174 [2024-12-09 11:58:14.253727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.175 [2024-12-09 11:58:14.253734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.175 [2024-12-09 11:58:14.253740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.175 [2024-12-09 11:58:14.255838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.175 [2024-12-09 11:58:14.256095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.175 [2024-12-09 11:58:14.256096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 [2024-12-09 11:58:14.932739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 Malloc0 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 [2024-12-09 11:58:14.999773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 [2024-12-09 11:58:15.011719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 Malloc1 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.175 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=125287 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 125287 /var/tmp/bdevperf.sock 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 125287 ']' 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
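Before bdevperf was pointed at /var/tmp/bdevperf.sock, the nvmf_tcp_init trace above moved one e810 port into a private namespace for the target, left the other on the host side as the initiator, opened the listener port, and verified the path with pings. Replayed as a condensed sketch (interface names and addresses as logged; needs root):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                                # target gets its own namespace
    ip link set cvl_0_0 netns "$NS"                   # first port -> target side
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP on the host side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator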
00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.436 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.378 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:08.378 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:08.378 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.378 11:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 NVMe0n1 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.378 1 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.378 request: 00:23:08.378 { 00:23:08.378 "name": "NVMe0", 00:23:08.378 "trtype": "tcp", 00:23:08.378 "traddr": "10.0.0.2", 00:23:08.378 "adrfam": "ipv4", 00:23:08.378 "trsvcid": "4420", 00:23:08.378 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:08.378 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:08.378 "hostaddr": "10.0.0.1", 00:23:08.378 "prchk_reftag": false, 00:23:08.378 "prchk_guard": false, 00:23:08.378 "hdgst": false, 00:23:08.378 "ddgst": false, 00:23:08.378 "allow_unrecognized_csi": false, 00:23:08.378 "method": "bdev_nvme_attach_controller", 00:23:08.378 "req_id": 1 00:23:08.378 } 00:23:08.378 Got JSON-RPC error response 00:23:08.378 response: 00:23:08.378 { 00:23:08.378 "code": -114, 00:23:08.378 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:08.378 } 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.378 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.379 request: 00:23:08.379 { 00:23:08.379 "name": "NVMe0", 00:23:08.379 "trtype": "tcp", 00:23:08.379 "traddr": "10.0.0.2", 00:23:08.379 "adrfam": "ipv4", 00:23:08.379 "trsvcid": "4420", 00:23:08.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.379 "hostaddr": "10.0.0.1", 00:23:08.379 "prchk_reftag": false, 00:23:08.379 "prchk_guard": false, 00:23:08.379 "hdgst": false, 00:23:08.379 "ddgst": false, 00:23:08.379 "allow_unrecognized_csi": false, 00:23:08.379 "method": "bdev_nvme_attach_controller", 00:23:08.379 "req_id": 1 00:23:08.379 } 00:23:08.379 Got JSON-RPC error response 00:23:08.379 response: 00:23:08.379 { 00:23:08.379 "code": -114, 00:23:08.379 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:08.379 } 00:23:08.379 11:58:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.379 request: 00:23:08.379 { 00:23:08.379 "name": "NVMe0", 00:23:08.379 "trtype": "tcp", 00:23:08.379 "traddr": "10.0.0.2", 00:23:08.379 "adrfam": "ipv4", 00:23:08.379 "trsvcid": "4420", 00:23:08.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.379 "hostaddr": "10.0.0.1", 00:23:08.379 "prchk_reftag": false, 00:23:08.379 "prchk_guard": false, 00:23:08.379 "hdgst": false, 00:23:08.379 "ddgst": false, 00:23:08.379 "multipath": "disable", 00:23:08.379 "allow_unrecognized_csi": false, 00:23:08.379 "method": "bdev_nvme_attach_controller", 00:23:08.379 "req_id": 1 00:23:08.379 } 00:23:08.379 Got JSON-RPC error response 00:23:08.379 response: 00:23:08.379 { 00:23:08.379 "code": -114, 00:23:08.379 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:08.379 } 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.379 11:58:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.379 request: 00:23:08.379 { 00:23:08.379 "name": "NVMe0", 00:23:08.379 "trtype": "tcp", 00:23:08.379 "traddr": "10.0.0.2", 00:23:08.379 "adrfam": "ipv4", 00:23:08.379 "trsvcid": "4420", 00:23:08.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.379 "hostaddr": "10.0.0.1", 00:23:08.379 "prchk_reftag": false, 00:23:08.379 "prchk_guard": false, 00:23:08.379 "hdgst": false, 00:23:08.379 "ddgst": false, 00:23:08.379 "multipath": "failover", 00:23:08.379 "allow_unrecognized_csi": false, 00:23:08.379 "method": "bdev_nvme_attach_controller", 00:23:08.379 "req_id": 1 00:23:08.379 } 00:23:08.379 Got JSON-RPC error response 00:23:08.379 response: 00:23:08.379 { 00:23:08.379 "code": -114, 00:23:08.379 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:08.379 } 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.379 NVMe0n1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
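All four rejected attach attempts above return JSON-RPC error -114: once a controller named NVMe0 exists, a second attach under that name must keep the same hostnqn, the same subsystem NQN, and a compatible multipath mode; only the same subsystem's second listener (port 4421) is accepted as an extra path. Assuming rpc_cmd forwards these flags to scripts/rpc.py unchanged, the same checks can be driven by hand against the bdevperf RPC socket:

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # accepted: same controller name, same subsystem, second listener -> extra path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # rejected with -114: same name but a different subsystem NQN
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 \
        || echo "rejected: controller NVMe0 already exists"

    # rejected with -114: same path but a conflicting multipath mode
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover \
        || echo "rejected: multipath mode conflicts"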
00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.379 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.640 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:08.640 11:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.026 { 00:23:10.026 "results": [ 00:23:10.026 { 00:23:10.026 "job": "NVMe0n1", 00:23:10.026 "core_mask": "0x1", 00:23:10.026 "workload": "write", 00:23:10.026 "status": "finished", 00:23:10.026 "queue_depth": 128, 00:23:10.026 "io_size": 4096, 00:23:10.026 "runtime": 1.006583, 00:23:10.026 "iops": 22474.053307079495, 00:23:10.026 "mibps": 87.78927073077928, 00:23:10.026 "io_failed": 0, 00:23:10.026 "io_timeout": 0, 00:23:10.026 "avg_latency_us": 5682.1317301741665, 00:23:10.026 "min_latency_us": 2102.6133333333332, 00:23:10.026 "max_latency_us": 14308.693333333333 00:23:10.026 } 00:23:10.026 ], 00:23:10.026 "core_count": 1 00:23:10.026 } 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 125287 ']' 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125287' 00:23:10.026 killing process with pid 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 125287 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:10.026 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:10.026 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:10.026 [2024-12-09 11:58:15.130605] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:23:10.026 [2024-12-09 11:58:15.130671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125287 ]
00:23:10.026 [2024-12-09 11:58:15.218966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:10.026 [2024-12-09 11:58:15.255301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:10.026 [2024-12-09 11:58:16.350584] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name f7218be7-5005-40dd-ab18-dc87c96a43ba already exists
00:23:10.026 [2024-12-09 11:58:16.350615] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:f7218be7-5005-40dd-ab18-dc87c96a43ba alias for bdev NVMe1n1
00:23:10.026 [2024-12-09 11:58:16.350624] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:10.026 Running I/O for 1 seconds...
00:23:10.026 22430.00 IOPS, 87.62 MiB/s
00:23:10.026 Latency(us)
00:23:10.026 [2024-12-09T10:58:17.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.026 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:10.026 NVMe0n1 : 1.01 22474.05 87.79 0.00 0.00 5682.13 2102.61 14308.69
00:23:10.026 [2024-12-09T10:58:17.912Z] ===================================================================================================================
00:23:10.026 [2024-12-09T10:58:17.912Z] Total : 22474.05 87.79 0.00 0.00 5682.13 2102.61 14308.69
00:23:10.026 Received shutdown signal, test time was about 1.000000 seconds
00:23:10.026
00:23:10.026 Latency(us)
00:23:10.026 [2024-12-09T10:58:17.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.026 [2024-12-09T10:58:17.912Z] ===================================================================================================================
00:23:10.026 [2024-12-09T10:58:17.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:10.027 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # sync
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # set +e
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # for i in {1..20}
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:23:10.027 rmmod nvme_tcp
00:23:10.027 rmmod nvme_fabrics
00:23:10.027 rmmod nvme_keyring
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # set -e
00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@130 -- # return 0
00:23:10.027
11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 125105 ']' 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 125105 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 125105 ']' 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 125105 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125105 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125105' 00:23:10.027 killing process with pid 125105 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 125105 00:23:10.027 11:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 125105 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # iptr 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # remove_spdk_ns 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.288 11:58:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:23:12.839 00:23:12.839 real 0m13.642s 00:23:12.839 user 0m16.373s 00:23:12.839 sys 0m6.334s 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.839 ************************************ 00:23:12.839 END TEST nvmf_multicontroller 00:23:12.839 ************************************ 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.839 ************************************ 00:23:12.839 START TEST nvmf_aer 00:23:12.839 ************************************ 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:12.839 * Looking for test storage... 00:23:12.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.839 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.840 --rc genhtml_branch_coverage=1 00:23:12.840 --rc genhtml_function_coverage=1 00:23:12.840 --rc genhtml_legend=1 00:23:12.840 --rc geninfo_all_blocks=1 00:23:12.840 --rc geninfo_unexecuted_blocks=1 00:23:12.840 00:23:12.840 ' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.840 --rc genhtml_branch_coverage=1 00:23:12.840 --rc genhtml_function_coverage=1 00:23:12.840 --rc genhtml_legend=1 00:23:12.840 --rc geninfo_all_blocks=1 00:23:12.840 --rc geninfo_unexecuted_blocks=1 00:23:12.840 00:23:12.840 ' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.840 --rc genhtml_branch_coverage=1 00:23:12.840 --rc genhtml_function_coverage=1 00:23:12.840 --rc genhtml_legend=1 00:23:12.840 --rc geninfo_all_blocks=1 00:23:12.840 --rc geninfo_unexecuted_blocks=1 00:23:12.840 00:23:12.840 ' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.840 --rc genhtml_branch_coverage=1 00:23:12.840 --rc genhtml_function_coverage=1 00:23:12.840 --rc genhtml_legend=1 00:23:12.840 --rc geninfo_all_blocks=1 00:23:12.840 --rc geninfo_unexecuted_blocks=1 00:23:12.840 00:23:12.840 ' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # : 0 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:23:12.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@56 -- # have_pci_nics=0 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.840 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.841 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.841 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:12.841 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:23:12.841 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # xtrace_disable 00:23:12.841 11:58:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_devs=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_devs 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_net_devs=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # pci_drivers=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # local -A pci_drivers 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # net_devs=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga net_devs 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # e810=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga e810 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # x722=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga x722 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # mlx=() 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # local -ga mlx 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@357 -- # 
pci_devs=("${e810[@]}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:20.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:20.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:20.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:20.987 11:58:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:20.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.987 
11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:23:20.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:20.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms
00:23:20.987
00:23:20.987 --- 10.0.0.2 ping statistics ---
00:23:20.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:20.987 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:20.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:20.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:23:20.987
00:23:20.987 --- 10.0.0.1 ping statistics ---
00:23:20.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:20.987 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:23:20.987 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=130091
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 130091
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 130091 ']'
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:20.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:20.988 11:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:20.988 [2024-12-09 11:58:27.962779] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:23:20.988 [2024-12-09 11:58:27.962852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.988 [2024-12-09 11:58:28.063762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.988 [2024-12-09 11:58:28.116375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.988 [2024-12-09 11:58:28.116433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.988 [2024-12-09 11:58:28.116441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.988 [2024-12-09 11:58:28.116449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.988 [2024-12-09 11:58:28.116456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.988 [2024-12-09 11:58:28.118524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.988 [2024-12-09 11:58:28.118661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.988 [2024-12-09 11:58:28.118828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.988 [2024-12-09 11:58:28.118927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.988 [2024-12-09 11:58:28.812426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.988 Malloc0 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.988 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:21.249 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]]
00:23:21.249 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:21.249 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.249 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.249 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.250 [2024-12-09 11:58:28.891700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.250 [
00:23:21.250 {
00:23:21.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:21.250 "subtype": "Discovery",
00:23:21.250 "listen_addresses": [],
00:23:21.250 "allow_any_host": true,
00:23:21.250 "hosts": []
00:23:21.250 },
00:23:21.250 {
00:23:21.250 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:21.250 "subtype": "NVMe",
00:23:21.250 "listen_addresses": [
00:23:21.250 {
00:23:21.250 "trtype": "TCP",
00:23:21.250 "adrfam": "IPv4",
00:23:21.250 "traddr": "10.0.0.2",
00:23:21.250 "trsvcid": "4420"
00:23:21.250 }
00:23:21.250 ],
00:23:21.250 "allow_any_host": true,
00:23:21.250 "hosts": [],
00:23:21.250 "serial_number": "SPDK00000000000001",
00:23:21.250 "model_number": "SPDK bdev Controller",
00:23:21.250 "max_namespaces": 2,
00:23:21.250 "min_cntlid": 1,
00:23:21.250 "max_cntlid": 65519,
00:23:21.250 "namespaces": [
00:23:21.250 {
00:23:21.250 "nsid": 1,
00:23:21.250 "bdev_name": "Malloc0",
00:23:21.250 "name": "Malloc0",
00:23:21.250 "nguid": "DCA6854946DC45D7B38CE88B7BFDD430",
00:23:21.250 "uuid": "dca68549-46dc-45d7-b38c-e88b7bfdd430"
00:23:21.250 }
00:23:21.250 ]
00:23:21.250 }
00:23:21.250 ]
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=130169
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:23:21.250 11:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.250 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 Malloc1
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 Asynchronous Event Request test
00:23:21.512 Attaching to 10.0.0.2
00:23:21.512 Attached to 10.0.0.2
00:23:21.512 Registering asynchronous event callbacks...
00:23:21.512 Starting namespace attribute notice tests for all controllers...
00:23:21.512 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:23:21.512 aer_cb - Changed Namespace
00:23:21.512 Cleaning up...
00:23:21.512 [
00:23:21.512 {
00:23:21.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:23:21.512 "subtype": "Discovery",
00:23:21.512 "listen_addresses": [],
00:23:21.512 "allow_any_host": true,
00:23:21.512 "hosts": []
00:23:21.512 },
00:23:21.512 {
00:23:21.512 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:21.512 "subtype": "NVMe",
00:23:21.512 "listen_addresses": [
00:23:21.512 {
00:23:21.512 "trtype": "TCP",
00:23:21.512 "adrfam": "IPv4",
00:23:21.512 "traddr": "10.0.0.2",
00:23:21.512 "trsvcid": "4420"
00:23:21.512 }
00:23:21.512 ],
00:23:21.512 "allow_any_host": true,
00:23:21.512 "hosts": [],
00:23:21.512 "serial_number": "SPDK00000000000001",
00:23:21.512 "model_number": "SPDK bdev Controller",
00:23:21.512 "max_namespaces": 2,
00:23:21.512 "min_cntlid": 1,
00:23:21.512 "max_cntlid": 65519,
00:23:21.512 "namespaces": [
00:23:21.512 {
00:23:21.512 "nsid": 1,
00:23:21.512 "bdev_name": "Malloc0",
00:23:21.512 "name": "Malloc0",
00:23:21.512 "nguid": "DCA6854946DC45D7B38CE88B7BFDD430",
00:23:21.512 "uuid": "dca68549-46dc-45d7-b38c-e88b7bfdd430"
00:23:21.512 },
00:23:21.512 {
00:23:21.512 "nsid": 2,
00:23:21.512 "bdev_name": "Malloc1",
00:23:21.512 "name": "Malloc1",
00:23:21.512 "nguid": "71A429532759449F8138C2C2251FB176",
00:23:21.512 "uuid": "71a42953-2759-449f-8138-c2c2251fb176"
00:23:21.512 }
00:23:21.512 ]
00:23:21.512 }
00:23:21.512 ]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 130169
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # sync
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # set +e
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # for i in {1..20}
00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:23:21.512 rmmod
nvme_tcp 00:23:21.512 rmmod nvme_fabrics 00:23:21.512 rmmod nvme_keyring 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # set -e 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@130 -- # return 0 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 130091 ']' 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 130091 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 130091 ']' 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 130091 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.512 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130091 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130091' 00:23:21.773 killing process with pid 130091 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 130091 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 130091 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # iptr 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # remove_spdk_ns 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.773 11:58:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:23:23.895 00:23:23.895 real 0m11.412s 00:23:23.895 user 0m7.847s 00:23:23.895 sys 0m6.144s 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.895 ************************************ 00:23:23.895 END TEST nvmf_aer 00:23:23.895 ************************************ 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.895 ************************************ 00:23:23.895 START TEST nvmf_async_init 00:23:23.895 ************************************ 00:23:23.895 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:24.156 * Looking for test storage... 00:23:24.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.156 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:24.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.157 --rc genhtml_branch_coverage=1 00:23:24.157 --rc genhtml_function_coverage=1 00:23:24.157 --rc genhtml_legend=1 00:23:24.157 --rc geninfo_all_blocks=1 00:23:24.157 --rc geninfo_unexecuted_blocks=1 00:23:24.157 00:23:24.157 ' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:24.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.157 --rc genhtml_branch_coverage=1 00:23:24.157 --rc genhtml_function_coverage=1 00:23:24.157 --rc genhtml_legend=1 00:23:24.157 --rc geninfo_all_blocks=1 00:23:24.157 --rc geninfo_unexecuted_blocks=1 00:23:24.157 00:23:24.157 ' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:24.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.157 --rc genhtml_branch_coverage=1 00:23:24.157 --rc genhtml_function_coverage=1 00:23:24.157 --rc genhtml_legend=1 00:23:24.157 --rc geninfo_all_blocks=1 00:23:24.157 --rc geninfo_unexecuted_blocks=1 00:23:24.157 00:23:24.157 ' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:24.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.157 --rc genhtml_branch_coverage=1 00:23:24.157 --rc genhtml_function_coverage=1 00:23:24.157 --rc genhtml_legend=1 00:23:24.157 --rc geninfo_all_blocks=1 00:23:24.157 --rc geninfo_unexecuted_blocks=1 00:23:24.157 00:23:24.157 ' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.157 11:58:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # : 0 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:23:24.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@56 -- # have_pci_nics=0 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:24.157 11:58:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f2f9764f2df3440c96f74aeef9d65f13 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # xtrace_disable 00:23:24.157 11:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_devs=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_devs 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_net_devs=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # pci_drivers=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # local -A pci_drivers 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # net_devs=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga net_devs 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # e810=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga e810 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # x722=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga x722 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # mlx=() 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # local -ga mlx 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.285 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.286 11:58:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:23:32.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:23:32.286 00:23:32.286 --- 10.0.0.2 ping statistics --- 00:23:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.286 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:32.286 00:23:32.286 --- 10.0.0.1 ping statistics --- 00:23:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.286 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=134503 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 134503 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 134503 ']' 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.286 11:58:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.286 [2024-12-09 11:58:39.486992] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:23:32.286 [2024-12-09 11:58:39.487061] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.287 [2024-12-09 11:58:39.583434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.287 [2024-12-09 11:58:39.633768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.287 [2024-12-09 11:58:39.633822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.287 [2024-12-09 11:58:39.633831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.287 [2024-12-09 11:58:39.633838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.287 [2024-12-09 11:58:39.633844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.287 [2024-12-09 11:58:39.634608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 [2024-12-09 11:58:40.349546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 null0 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f2f9764f2df3440c96f74aeef9d65f13 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.547 [2024-12-09 11:58:40.409887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.547 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.808 nvme0n1 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.808 [ 00:23:32.808 { 00:23:32.808 "name": "nvme0n1", 00:23:32.808 "aliases": [ 00:23:32.808 "f2f9764f-2df3-440c-96f7-4aeef9d65f13" 00:23:32.808 ], 00:23:32.808 "product_name": "NVMe disk", 00:23:32.808 "block_size": 512, 00:23:32.808 "num_blocks": 2097152, 00:23:32.808 "uuid": "f2f9764f-2df3-440c-96f7-4aeef9d65f13", 00:23:32.808 "numa_id": 0, 00:23:32.808 "assigned_rate_limits": { 00:23:32.808 "rw_ios_per_sec": 0, 00:23:32.808 "rw_mbytes_per_sec": 0, 00:23:32.808 "r_mbytes_per_sec": 0, 00:23:32.808 "w_mbytes_per_sec": 0 00:23:32.808 }, 00:23:32.808 "claimed": false, 00:23:32.808 "zoned": false, 00:23:32.808 "supported_io_types": { 00:23:32.808 "read": true, 00:23:32.808 "write": true, 00:23:32.808 "unmap": false, 00:23:32.808 "flush": true, 00:23:32.808 "reset": true, 00:23:32.808 "nvme_admin": true, 00:23:32.808 "nvme_io": true, 00:23:32.808 "nvme_io_md": false, 00:23:32.808 "write_zeroes": true, 00:23:32.808 "zcopy": false, 00:23:32.808 "get_zone_info": false, 00:23:32.808 "zone_management": false, 00:23:32.808 "zone_append": false, 00:23:32.808 "compare": true, 00:23:32.808 "compare_and_write": true, 00:23:32.808 "abort": true, 00:23:32.808 "seek_hole": false, 00:23:32.808 "seek_data": false, 00:23:32.808 "copy": true, 00:23:32.808 "nvme_iov_md": false 00:23:32.808 }, 00:23:32.808 
"memory_domains": [ 00:23:32.808 { 00:23:32.808 "dma_device_id": "system", 00:23:32.808 "dma_device_type": 1 00:23:32.808 } 00:23:32.808 ], 00:23:32.808 "driver_specific": { 00:23:32.808 "nvme": [ 00:23:32.808 { 00:23:32.808 "trid": { 00:23:32.808 "trtype": "TCP", 00:23:32.808 "adrfam": "IPv4", 00:23:32.808 "traddr": "10.0.0.2", 00:23:32.808 "trsvcid": "4420", 00:23:32.808 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:32.808 }, 00:23:32.808 "ctrlr_data": { 00:23:32.808 "cntlid": 1, 00:23:32.808 "vendor_id": "0x8086", 00:23:32.808 "model_number": "SPDK bdev Controller", 00:23:32.808 "serial_number": "00000000000000000000", 00:23:32.808 "firmware_revision": "25.01", 00:23:32.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:32.808 "oacs": { 00:23:32.808 "security": 0, 00:23:32.808 "format": 0, 00:23:32.808 "firmware": 0, 00:23:32.808 "ns_manage": 0 00:23:32.808 }, 00:23:32.808 "multi_ctrlr": true, 00:23:32.808 "ana_reporting": false 00:23:32.808 }, 00:23:32.808 "vs": { 00:23:32.808 "nvme_version": "1.3" 00:23:32.808 }, 00:23:32.808 "ns_data": { 00:23:32.808 "id": 1, 00:23:32.808 "can_share": true 00:23:32.808 } 00:23:32.808 } 00:23:32.808 ], 00:23:32.808 "mp_policy": "active_passive" 00:23:32.808 } 00:23:32.808 } 00:23:32.808 ] 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.808 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:32.808 [2024-12-09 11:58:40.686412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.808 [2024-12-09 11:58:40.686505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2476880 (9): Bad file descriptor 00:23:33.068 [2024-12-09 11:58:40.818745] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:33.068 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.068 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:33.068 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.068 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.068 [ 00:23:33.068 { 00:23:33.068 "name": "nvme0n1", 00:23:33.068 "aliases": [ 00:23:33.068 "f2f9764f-2df3-440c-96f7-4aeef9d65f13" 00:23:33.068 ], 00:23:33.068 "product_name": "NVMe disk", 00:23:33.068 "block_size": 512, 00:23:33.068 "num_blocks": 2097152, 00:23:33.068 "uuid": "f2f9764f-2df3-440c-96f7-4aeef9d65f13", 00:23:33.068 "numa_id": 0, 00:23:33.068 "assigned_rate_limits": { 00:23:33.068 "rw_ios_per_sec": 0, 00:23:33.068 "rw_mbytes_per_sec": 0, 00:23:33.068 "r_mbytes_per_sec": 0, 00:23:33.068 "w_mbytes_per_sec": 0 00:23:33.068 }, 00:23:33.068 "claimed": false, 00:23:33.068 "zoned": false, 00:23:33.068 "supported_io_types": { 00:23:33.068 "read": true, 00:23:33.068 "write": true, 00:23:33.068 "unmap": false, 00:23:33.068 "flush": true, 00:23:33.068 "reset": true, 00:23:33.068 "nvme_admin": true, 00:23:33.068 "nvme_io": true, 00:23:33.068 "nvme_io_md": false, 00:23:33.068 "write_zeroes": true, 00:23:33.068 "zcopy": false, 00:23:33.068 "get_zone_info": false, 00:23:33.068 "zone_management": false, 00:23:33.068 "zone_append": false, 00:23:33.068 "compare": true, 00:23:33.068 "compare_and_write": true, 00:23:33.068 "abort": true, 00:23:33.068 "seek_hole": false, 00:23:33.069 "seek_data": false, 00:23:33.069 "copy": true, 00:23:33.069 "nvme_iov_md": false 00:23:33.069 }, 00:23:33.069 "memory_domains": [ 00:23:33.069 { 00:23:33.069 "dma_device_id": "system", 00:23:33.069 "dma_device_type": 1 00:23:33.069 } 00:23:33.069 ], 00:23:33.069 "driver_specific": { 00:23:33.069 "nvme": [ 00:23:33.069 { 00:23:33.069 "trid": { 00:23:33.069 "trtype": "TCP", 00:23:33.069 "adrfam": "IPv4", 00:23:33.069 "traddr": "10.0.0.2", 00:23:33.069 "trsvcid": "4420", 00:23:33.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:33.069 }, 00:23:33.069 "ctrlr_data": { 00:23:33.069 "cntlid": 2, 00:23:33.069 "vendor_id": "0x8086", 00:23:33.069 "model_number": "SPDK bdev Controller", 00:23:33.069 "serial_number": "00000000000000000000", 00:23:33.069 "firmware_revision": "25.01", 00:23:33.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.069 "oacs": { 00:23:33.069 "security": 0, 00:23:33.069 "format": 0, 00:23:33.069 "firmware": 0, 00:23:33.069 "ns_manage": 0 00:23:33.069 }, 00:23:33.069 "multi_ctrlr": true, 00:23:33.069 "ana_reporting": false 00:23:33.069 }, 00:23:33.069 "vs": { 00:23:33.069 "nvme_version": "1.3" 00:23:33.069 }, 00:23:33.069 "ns_data": { 00:23:33.069 "id": 1, 00:23:33.069 "can_share": true 00:23:33.069 } 00:23:33.069 } 00:23:33.069 ], 00:23:33.069 "mp_policy": "active_passive" 00:23:33.069 } 00:23:33.069 } 00:23:33.069 ] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
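For reference, the target-side provisioning this test drove through its rpc_cmd wrapper maps onto the same scripts/rpc.py calls; a sketch under the assumptions of this run (1024 MiB null bdev with 512-byte blocks, i.e. the 2097152 blocks seen in the dumps, and the nguid generated above):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f2f9764f2df3440c96f74aeef9d65f13
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420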
00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rlSk7v9a3t 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rlSk7v9a3t 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rlSk7v9a3t 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 [2024-12-09 11:58:40.907132] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.069 [2024-12-09 11:58:40.907287] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.069 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.069 [2024-12-09 11:58:40.931215] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.329 nvme0n1 00:23:33.329 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.329 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:33.329 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.329 11:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.329 [ 00:23:33.329 { 00:23:33.329 "name": "nvme0n1", 00:23:33.329 "aliases": [ 00:23:33.329 "f2f9764f-2df3-440c-96f7-4aeef9d65f13" 00:23:33.329 ], 00:23:33.329 "product_name": "NVMe disk", 00:23:33.329 "block_size": 512, 00:23:33.329 "num_blocks": 2097152, 00:23:33.329 "uuid": "f2f9764f-2df3-440c-96f7-4aeef9d65f13", 00:23:33.329 "numa_id": 0, 00:23:33.329 "assigned_rate_limits": { 00:23:33.329 "rw_ios_per_sec": 0, 00:23:33.329 "rw_mbytes_per_sec": 0, 00:23:33.329 "r_mbytes_per_sec": 0, 00:23:33.329 "w_mbytes_per_sec": 0 00:23:33.329 }, 00:23:33.329 "claimed": false, 00:23:33.329 "zoned": false, 00:23:33.329 "supported_io_types": { 00:23:33.329 "read": true, 00:23:33.329 "write": true, 00:23:33.329 "unmap": false, 00:23:33.329 "flush": true, 00:23:33.329 "reset": true, 00:23:33.329 "nvme_admin": true, 00:23:33.329 "nvme_io": true, 00:23:33.329 "nvme_io_md": false, 00:23:33.329 "write_zeroes": true, 00:23:33.329 "zcopy": false, 00:23:33.329 "get_zone_info": false, 00:23:33.329 "zone_management": false, 00:23:33.329 "zone_append": false, 00:23:33.329 "compare": true, 00:23:33.329 "compare_and_write": true, 00:23:33.329 "abort": true, 00:23:33.329 "seek_hole": false, 00:23:33.329 "seek_data": false, 00:23:33.329 "copy": true, 00:23:33.329 "nvme_iov_md": false 00:23:33.329 }, 00:23:33.329 "memory_domains": [ 00:23:33.329 { 00:23:33.329 "dma_device_id": "system", 00:23:33.329 "dma_device_type": 1 00:23:33.329 } 00:23:33.329 ], 00:23:33.329 "driver_specific": { 00:23:33.329 "nvme": [ 00:23:33.329 { 00:23:33.329 "trid": { 00:23:33.329 "trtype": "TCP", 00:23:33.329 "adrfam": "IPv4", 00:23:33.329 "traddr": "10.0.0.2", 00:23:33.329 "trsvcid": "4421", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:33.329 }, 00:23:33.329 "ctrlr_data": { 00:23:33.329 "cntlid": 3, 00:23:33.329 "vendor_id": "0x8086", 00:23:33.329 "model_number": "SPDK bdev Controller", 00:23:33.329 "serial_number": "00000000000000000000", 00:23:33.329 "firmware_revision": "25.01", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.329 "oacs": { 00:23:33.329 "security": 0, 00:23:33.329 "format": 0, 00:23:33.329 "firmware": 0, 00:23:33.329 "ns_manage": 0 00:23:33.330 }, 00:23:33.330 "multi_ctrlr": true, 00:23:33.330 "ana_reporting": false 00:23:33.330 }, 00:23:33.330 "vs": { 00:23:33.330 "nvme_version": "1.3" 00:23:33.330 }, 00:23:33.330 "ns_data": { 00:23:33.330 "id": 1, 00:23:33.330 "can_share": true 00:23:33.330 } 00:23:33.330 } 00:23:33.330 ], 00:23:33.330 "mp_policy": "active_passive" 00:23:33.330 } 00:23:33.330 } 00:23:33.330 ] 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rlSk7v9a3t 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
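The TLS leg just completed reduces to registering a PSK in the keyring and marking both the listener and the host entry as secure-channel (the target logs that TLS support is considered experimental, as seen above). A minimal sketch using the same throwaway interchange key the test wrote to its mktemp file; /tmp/psk.key is a placeholder path:

    umask 077; echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
    rpc.py keyring_file_add_key key0 /tmp/psk.key
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0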
00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # sync 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # set +e 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # for i in {1..20} 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:23:33.330 rmmod nvme_tcp 00:23:33.330 rmmod nvme_fabrics 00:23:33.330 rmmod nvme_keyring 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # set -e 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@130 -- # return 0 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 134503 ']' 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 134503 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 134503 ']' 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 134503 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134503 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134503' 00:23:33.330 killing process with pid 134503 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 134503 00:23:33.330 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 134503 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # iptr 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # remove_spdk_ns 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.590 
11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.590 11:58:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:23:36.134 00:23:36.134 real 0m11.729s 00:23:36.134 user 0m4.200s 00:23:36.134 sys 0m6.100s 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.134 ************************************ 00:23:36.134 END TEST nvmf_async_init 00:23:36.134 ************************************ 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.134 ************************************ 00:23:36.134 START TEST dma 00:23:36.134 ************************************ 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:36.134 * Looking for test storage... 00:23:36.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:36.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.134 --rc genhtml_branch_coverage=1 00:23:36.134 --rc genhtml_function_coverage=1 00:23:36.134 --rc genhtml_legend=1 00:23:36.134 --rc geninfo_all_blocks=1 00:23:36.134 --rc geninfo_unexecuted_blocks=1 00:23:36.134 00:23:36.134 ' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:36.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.134 --rc genhtml_branch_coverage=1 00:23:36.134 --rc genhtml_function_coverage=1 00:23:36.134 --rc genhtml_legend=1 00:23:36.134 --rc geninfo_all_blocks=1 00:23:36.134 --rc geninfo_unexecuted_blocks=1 00:23:36.134 00:23:36.134 ' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:36.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.134 --rc genhtml_branch_coverage=1 00:23:36.134 --rc genhtml_function_coverage=1 00:23:36.134 --rc genhtml_legend=1 00:23:36.134 --rc geninfo_all_blocks=1 00:23:36.134 --rc geninfo_unexecuted_blocks=1 00:23:36.134 00:23:36.134 ' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:36.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.134 --rc genhtml_branch_coverage=1 00:23:36.134 --rc genhtml_function_coverage=1 00:23:36.134 --rc genhtml_legend=1 00:23:36.134 --rc geninfo_all_blocks=1 00:23:36.134 --rc geninfo_unexecuted_blocks=1 00:23:36.134 00:23:36.134 ' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.134 
11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # : 0 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.134 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:23:36.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@56 -- # have_pci_nics=0 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:36.135 00:23:36.135 real 0m0.203s 00:23:36.135 user 0m0.114s 00:23:36.135 sys 0m0.103s 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:36.135 ************************************ 00:23:36.135 END TEST dma 00:23:36.135 ************************************ 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.135 ************************************ 00:23:36.135 START TEST nvmf_identify 00:23:36.135 
************************************ 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:36.135 * Looking for test storage... 00:23:36.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.135 --rc genhtml_branch_coverage=1 00:23:36.135 --rc genhtml_function_coverage=1 00:23:36.135 --rc genhtml_legend=1 00:23:36.135 --rc geninfo_all_blocks=1 00:23:36.135 --rc geninfo_unexecuted_blocks=1 00:23:36.135 00:23:36.135 ' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.135 --rc genhtml_branch_coverage=1 00:23:36.135 --rc genhtml_function_coverage=1 00:23:36.135 --rc genhtml_legend=1 00:23:36.135 --rc geninfo_all_blocks=1 00:23:36.135 --rc geninfo_unexecuted_blocks=1 00:23:36.135 00:23:36.135 ' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.135 --rc genhtml_branch_coverage=1 00:23:36.135 --rc genhtml_function_coverage=1 00:23:36.135 --rc genhtml_legend=1 00:23:36.135 --rc geninfo_all_blocks=1 00:23:36.135 --rc geninfo_unexecuted_blocks=1 00:23:36.135 00:23:36.135 ' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:36.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.135 --rc genhtml_branch_coverage=1 00:23:36.135 --rc genhtml_function_coverage=1 00:23:36.135 --rc genhtml_legend=1 00:23:36.135 --rc geninfo_all_blocks=1 00:23:36.135 --rc geninfo_unexecuted_blocks=1 00:23:36.135 00:23:36.135 ' 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.135 11:58:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.135 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.396 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.396 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.396 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # : 0 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:23:36.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@56 -- # have_pci_nics=0 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # xtrace_disable 00:23:36.397 11:58:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_devs=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_devs 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_net_devs=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # pci_drivers=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # local -A pci_drivers 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # net_devs=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga net_devs 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # e810=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga e810 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # x722=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga x722 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # mlx=() 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # local -ga mlx 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.537 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.537 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:23:44.537 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
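Editor's note: the trace above shows nvmf/common.sh resolving each matched E810 PCI address (device ID 0x159b) to its kernel network interface by globbing sysfs at common.sh@407. A minimal standalone sketch of that lookup, with the PCI addresses hard-coded for illustration (the script itself fills pci_devs from a cached PCI device scan):

    #!/usr/bin/env bash
    # Hypothetical address list; common.sh derives pci_devs from its cached PCI scan.
    pci_devs=("0000:4b:00.0" "0000:4b:00.1")
    for pci in "${pci_devs[@]}"; do
        # A bound NIC driver exposes its netdev name(s) under the device's net/ directory.
        for net_path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $net_path ]] || continue   # glob stays literal when no driver is bound
            echo "Found net device under $pci: ${net_path##*/}"
        done
    done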
00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.538 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.538 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:23:44.538 11:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:23:44.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:23:44.538 00:23:44.538 --- 10.0.0.2 ping statistics --- 00:23:44.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.538 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:44.538 00:23:44.538 --- 10.0.0.1 ping statistics --- 00:23:44.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.538 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=139201 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 139201 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 139201 ']' 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.538 11:58:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 [2024-12-09 11:58:51.393284] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:23:44.538 [2024-12-09 11:58:51.393352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.538 [2024-12-09 11:58:51.490990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.538 [2024-12-09 11:58:51.545012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.538 [2024-12-09 11:58:51.545069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.538 [2024-12-09 11:58:51.545082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.538 [2024-12-09 11:58:51.545089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.538 [2024-12-09 11:58:51.545095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.538 [2024-12-09 11:58:51.547485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.538 [2024-12-09 11:58:51.547616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.538 [2024-12-09 11:58:51.547787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.538 [2024-12-09 11:58:51.547997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 [2024-12-09 11:58:52.213528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 Malloc0 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.538 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.539 [2024-12-09 11:58:52.330485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.539 [ 00:23:44.539 { 00:23:44.539 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.539 "subtype": "Discovery", 00:23:44.539 "listen_addresses": [ 00:23:44.539 { 00:23:44.539 "trtype": "TCP", 00:23:44.539 "adrfam": "IPv4", 00:23:44.539 "traddr": "10.0.0.2", 00:23:44.539 "trsvcid": "4420" 00:23:44.539 } 00:23:44.539 ], 00:23:44.539 "allow_any_host": true, 00:23:44.539 "hosts": [] 00:23:44.539 }, 00:23:44.539 { 00:23:44.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.539 "subtype": "NVMe", 00:23:44.539 "listen_addresses": [ 00:23:44.539 { 00:23:44.539 "trtype": "TCP", 00:23:44.539 "adrfam": "IPv4", 00:23:44.539 "traddr": "10.0.0.2", 00:23:44.539 "trsvcid": "4420" 00:23:44.539 } 00:23:44.539 ], 00:23:44.539 "allow_any_host": true, 00:23:44.539 "hosts": [], 00:23:44.539 "serial_number": "SPDK00000000000001", 00:23:44.539 "model_number": "SPDK bdev Controller", 00:23:44.539 "max_namespaces": 32, 00:23:44.539 "min_cntlid": 1, 00:23:44.539 "max_cntlid": 65519, 00:23:44.539 "namespaces": [ 00:23:44.539 { 00:23:44.539 "nsid": 1, 00:23:44.539 "bdev_name": "Malloc0", 00:23:44.539 "name": "Malloc0", 00:23:44.539 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:44.539 "eui64": "ABCDEF0123456789", 00:23:44.539 "uuid": "0ca9d12e-17c0-497d-91d4-96f817fe5307" 00:23:44.539 } 00:23:44.539 ] 00:23:44.539 } 00:23:44.539 ] 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.539 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:44.539 [2024-12-09 11:58:52.394080] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:23:44.539 [2024-12-09 11:58:52.394122] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139267 ] 00:23:44.803 [2024-12-09 11:58:52.448801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:44.803 [2024-12-09 11:58:52.448854] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:44.803 [2024-12-09 11:58:52.448860] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:44.803 [2024-12-09 11:58:52.448876] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:44.803 [2024-12-09 11:58:52.448885] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:44.803 [2024-12-09 11:58:52.452896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:44.804 [2024-12-09 11:58:52.452930] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ee3690 0 00:23:44.804 [2024-12-09 11:58:52.460649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:44.804 [2024-12-09 11:58:52.460662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:44.804 [2024-12-09 11:58:52.460670] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:44.804 [2024-12-09 11:58:52.460674] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:44.804 [2024-12-09 11:58:52.460710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.460717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.460721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.460735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:44.804 [2024-12-09 11:58:52.460754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.468649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.468658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.468662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.468679] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:44.804 [2024-12-09 11:58:52.468690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:44.804 [2024-12-09 11:58:52.468696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:44.804 [2024-12-09 11:58:52.468712] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.468727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.468741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.468917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.468924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.468927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.468939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:44.804 [2024-12-09 11:58:52.468947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:44.804 [2024-12-09 11:58:52.468954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.468961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.468968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.468979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.469189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.469197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.469202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.469212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:44.804 [2024-12-09 11:58:52.469220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.469227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.469241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.469251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 
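Editor's note: the entries above capture the fabrics bring-up of the admin queue: a FABRIC CONNECT completes with CNTLID 0x0001, then PROPERTY GET reads of the VS and CAP registers run before the controller is enabled. The same discovery exchange can be driven from any Linux host with nvme-cli; a sketch assuming the target address and port used in this run (10.0.0.2:4420) and that nvme-cli plus the kernel nvme-tcp module are available:

    #!/usr/bin/env bash
    # Query the discovery subsystem this test is exercising (address/port from the log).
    sudo modprobe nvme-tcp
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420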
00:23:44.804 [2024-12-09 11:58:52.469388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.469395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.469398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.469407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.469419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.469434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.469444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.469646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.469653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.469657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.469665] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:44.804 [2024-12-09 11:58:52.469671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.469678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.469789] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:44.804 [2024-12-09 11:58:52.469794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.469803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.469810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.469817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.469828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.470017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.470023] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.470027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.470035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:44.804 [2024-12-09 11:58:52.470045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.470059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.470069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.470245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.470252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.470255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.804 [2024-12-09 11:58:52.470266] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:44.804 [2024-12-09 11:58:52.470271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:44.804 [2024-12-09 11:58:52.470279] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:44.804 [2024-12-09 11:58:52.470286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:44.804 [2024-12-09 11:58:52.470296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.804 [2024-12-09 11:58:52.470307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.804 [2024-12-09 11:58:52.470317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.804 [2024-12-09 11:58:52.470466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.804 [2024-12-09 11:58:52.470473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.804 [2024-12-09 11:58:52.470477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470481] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee3690): datao=0, datal=4096, cccid=0 00:23:44.804 [2024-12-09 11:58:52.470486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1f45100) on tqpair(0x1ee3690): expected_datao=0, payload_size=4096 00:23:44.804 [2024-12-09 11:58:52.470490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470502] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.804 [2024-12-09 11:58:52.470649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.804 [2024-12-09 11:58:52.470656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.804 [2024-12-09 11:58:52.470659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.805 [2024-12-09 11:58:52.470674] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:44.805 [2024-12-09 11:58:52.470680] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:44.805 [2024-12-09 11:58:52.470684] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:44.805 [2024-12-09 11:58:52.470689] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:44.805 [2024-12-09 11:58:52.470694] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:44.805 [2024-12-09 11:58:52.470700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:44.805 [2024-12-09 11:58:52.470709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:44.805 [2024-12-09 11:58:52.470719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.805 [2024-12-09 11:58:52.470736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:44.805 [2024-12-09 11:58:52.470753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0 00:23:44.805 [2024-12-09 11:58:52.470951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.805 [2024-12-09 11:58:52.470957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.805 [2024-12-09 11:58:52.470961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690 00:23:44.805 [2024-12-09 11:58:52.470973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.470980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee3690) 00:23:44.805 
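Editor's note: here the initializer parses the 4096-byte IDENTIFY CONTROLLER payload (datal=4096, cccid=0) and records the limits it reports: a transport cap of 4294967295 bytes reduced to 131072 by MDTS, CNTLID 0x0001, 16 transport SGEs, and fused compare-and-write support. Once a controller is connected, the same fields can be read back with nvme-cli; a sketch assuming a controller is present as /dev/nvme0 (hypothetical device node):

    #!/usr/bin/env bash
    # Print the identify-controller fields the driver derived its limits from.
    sudo nvme id-ctrl /dev/nvme0 | grep -E '^(mdts|cntlid|fuses|sgls)'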
00:23:44.805 [2024-12-09 11:58:52.470986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:44.805 [2024-12-09 11:58:52.470993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.470996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ee3690)
00:23:44.805 [2024-12-09 11:58:52.471006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:44.805 [2024-12-09 11:58:52.471012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ee3690)
00:23:44.805 [2024-12-09 11:58:52.471025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:44.805 [2024-12-09 11:58:52.471031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690)
00:23:44.805 [2024-12-09 11:58:52.471044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:44.805 [2024-12-09 11:58:52.471048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:23:44.805 [2024-12-09 11:58:52.471059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:44.805 [2024-12-09 11:58:52.471065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.805 [2024-12-09 11:58:52.471069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee3690)
00:23:44.805 [2024-12-09 11:58:52.471076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.805 [2024-12-09 11:58:52.471087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45100, cid 0, qid 0
00:23:44.805 [2024-12-09 11:58:52.471092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45280, cid 1, qid 0
00:23:44.805 [2024-12-09 11:58:52.471097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45400, cid 2, qid 0
00:23:44.805 [2024-12-09 11:58:52.471102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.805 [2024-12-09 11:58:52.471107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45700, cid 4, qid 0
00:23:44.805 [2024-12-09 11:58:52.471375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.805 [2024-12-09 11:58:52.471381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.805 [2024-12-09 11:58:52.471387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*:
enter 00:23:44.805 [2024-12-09 11:58:52.471391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45700) on tqpair=0x1ee3690 00:23:44.805 [2024-12-09 11:58:52.471396] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:44.805 [2024-12-09 11:58:52.471402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:44.805 [2024-12-09 11:58:52.471412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee3690) 00:23:44.805 [2024-12-09 11:58:52.471423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.805 [2024-12-09 11:58:52.471433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45700, cid 4, qid 0 00:23:44.805 [2024-12-09 11:58:52.471529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.805 [2024-12-09 11:58:52.471536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.805 [2024-12-09 11:58:52.471540] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471543] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee3690): datao=0, datal=4096, cccid=4 00:23:44.805 [2024-12-09 11:58:52.471548] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45700) on tqpair(0x1ee3690): expected_datao=0, payload_size=4096 00:23:44.805 [2024-12-09 11:58:52.471552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471574] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471578] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.805 [2024-12-09 11:58:52.471782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.805 [2024-12-09 11:58:52.471786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45700) on tqpair=0x1ee3690 00:23:44.805 [2024-12-09 11:58:52.471801] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:44.805 [2024-12-09 11:58:52.471823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee3690) 00:23:44.805 [2024-12-09 11:58:52.471834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.805 [2024-12-09 11:58:52.471843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.471851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee3690) 00:23:44.805 [2024-12-09 11:58:52.471857] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.805 [2024-12-09 11:58:52.471870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45700, cid 4, qid 0 00:23:44.805 [2024-12-09 11:58:52.471876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45880, cid 5, qid 0 00:23:44.805 [2024-12-09 11:58:52.472127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.805 [2024-12-09 11:58:52.472134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.805 [2024-12-09 11:58:52.472138] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.472141] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee3690): datao=0, datal=1024, cccid=4 00:23:44.805 [2024-12-09 11:58:52.472146] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45700) on tqpair(0x1ee3690): expected_datao=0, payload_size=1024 00:23:44.805 [2024-12-09 11:58:52.472152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.805 [2024-12-09 11:58:52.472159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.472163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.472170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.806 [2024-12-09 11:58:52.472176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.806 [2024-12-09 11:58:52.472179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.472183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45880) on tqpair=0x1ee3690 00:23:44.806 [2024-12-09 11:58:52.516646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.806 [2024-12-09 11:58:52.516656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.806 [2024-12-09 11:58:52.516660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.516664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45700) on tqpair=0x1ee3690 00:23:44.806 [2024-12-09 11:58:52.516675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.516679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee3690) 00:23:44.806 [2024-12-09 11:58:52.516685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.806 [2024-12-09 11:58:52.516701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45700, cid 4, qid 0 00:23:44.806 [2024-12-09 11:58:52.516903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.806 [2024-12-09 11:58:52.516910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.806 [2024-12-09 11:58:52.516913] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.806 [2024-12-09 11:58:52.516917] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee3690): datao=0, datal=3072, cccid=4 00:23:44.806 [2024-12-09 11:58:52.516922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45700) on tqpair(0x1ee3690): expected_datao=0, payload_size=3072 00:23:44.806 [2024-12-09 11:58:52.516926] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.516933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.516936] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.806 [2024-12-09 11:58:52.517161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.806 [2024-12-09 11:58:52.517164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45700) on tqpair=0x1ee3690
00:23:44.806 [2024-12-09 11:58:52.517177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee3690)
00:23:44.806 [2024-12-09 11:58:52.517187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.806 [2024-12-09 11:58:52.517200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45700, cid 4, qid 0
00:23:44.806 [2024-12-09 11:58:52.517456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:44.806 [2024-12-09 11:58:52.517463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:44.806 [2024-12-09 11:58:52.517466] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517470] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee3690): datao=0, datal=8, cccid=4
00:23:44.806 [2024-12-09 11:58:52.517474] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45700) on tqpair(0x1ee3690): expected_datao=0, payload_size=8
00:23:44.806 [2024-12-09 11:58:52.517482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517489] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.517492] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.558826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.806 [2024-12-09 11:58:52.558837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.806 [2024-12-09 11:58:52.558841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.806 [2024-12-09 11:58:52.558845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45700) on tqpair=0x1ee3690
00:23:44.806 =====================================================
00:23:44.806 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:44.806 =====================================================
00:23:44.806 Controller Capabilities/Features
00:23:44.806 ================================
00:23:44.806 Vendor ID: 0000
00:23:44.806 Subsystem Vendor ID: 0000
00:23:44.806 Serial Number: ....................
00:23:44.806 Model Number: ........................................
00:23:44.806 Firmware Version: 25.01
00:23:44.806 Recommended Arb Burst: 0
00:23:44.806 IEEE OUI Identifier: 00 00 00
00:23:44.806 Multi-path I/O
00:23:44.806 May have multiple subsystem ports: No
00:23:44.806 May have multiple controllers: No
00:23:44.806 Associated with SR-IOV VF: No
00:23:44.806 Max Data Transfer Size: 131072
00:23:44.806 Max Number of Namespaces: 0
00:23:44.806 Max Number of I/O Queues: 1024
00:23:44.806 NVMe Specification Version (VS): 1.3
00:23:44.806 NVMe Specification Version (Identify): 1.3
00:23:44.806 Maximum Queue Entries: 128
00:23:44.806 Contiguous Queues Required: Yes
00:23:44.806 Arbitration Mechanisms Supported
00:23:44.806 Weighted Round Robin: Not Supported
00:23:44.806 Vendor Specific: Not Supported
00:23:44.806 Reset Timeout: 15000 ms
00:23:44.806 Doorbell Stride: 4 bytes
00:23:44.806 NVM Subsystem Reset: Not Supported
00:23:44.806 Command Sets Supported
00:23:44.806 NVM Command Set: Supported
00:23:44.806 Boot Partition: Not Supported
00:23:44.806 Memory Page Size Minimum: 4096 bytes
00:23:44.806 Memory Page Size Maximum: 4096 bytes
00:23:44.806 Persistent Memory Region: Not Supported
00:23:44.806 Optional Asynchronous Events Supported
00:23:44.806 Namespace Attribute Notices: Not Supported
00:23:44.806 Firmware Activation Notices: Not Supported
00:23:44.806 ANA Change Notices: Not Supported
00:23:44.806 PLE Aggregate Log Change Notices: Not Supported
00:23:44.806 LBA Status Info Alert Notices: Not Supported
00:23:44.806 EGE Aggregate Log Change Notices: Not Supported
00:23:44.806 Normal NVM Subsystem Shutdown event: Not Supported
00:23:44.806 Zone Descriptor Change Notices: Not Supported
00:23:44.806 Discovery Log Change Notices: Supported
00:23:44.806 Controller Attributes
00:23:44.806 128-bit Host Identifier: Not Supported
00:23:44.806 Non-Operational Permissive Mode: Not Supported
00:23:44.806 NVM Sets: Not Supported
00:23:44.806 Read Recovery Levels: Not Supported
00:23:44.806 Endurance Groups: Not Supported
00:23:44.806 Predictable Latency Mode: Not Supported
00:23:44.806 Traffic Based Keep ALive: Not Supported
00:23:44.806 Namespace Granularity: Not Supported
00:23:44.806 SQ Associations: Not Supported
00:23:44.806 UUID List: Not Supported
00:23:44.806 Multi-Domain Subsystem: Not Supported
00:23:44.806 Fixed Capacity Management: Not Supported
00:23:44.806 Variable Capacity Management: Not Supported
00:23:44.806 Delete Endurance Group: Not Supported
00:23:44.806 Delete NVM Set: Not Supported
00:23:44.806 Extended LBA Formats Supported: Not Supported
00:23:44.806 Flexible Data Placement Supported: Not Supported
00:23:44.806
00:23:44.806 Controller Memory Buffer Support
00:23:44.806 ================================
00:23:44.806 Supported: No
00:23:44.806
00:23:44.806 Persistent Memory Region Support
00:23:44.806 ================================
00:23:44.806 Supported: No
00:23:44.806
00:23:44.806 Admin Command Set Attributes
00:23:44.806 ============================
00:23:44.806 Security Send/Receive: Not Supported
00:23:44.806 Format NVM: Not Supported
00:23:44.806 Firmware Activate/Download: Not Supported
00:23:44.806 Namespace Management: Not Supported
00:23:44.806 Device Self-Test: Not Supported
00:23:44.806 Directives: Not Supported
00:23:44.806 NVMe-MI: Not Supported
00:23:44.806 Virtualization Management: Not Supported
00:23:44.806 Doorbell Buffer Config: Not Supported
00:23:44.806 Get LBA Status Capability: Not Supported
00:23:44.806 Command & Feature Lockdown Capability: Not Supported
00:23:44.806 Abort Command Limit: 1
00:23:44.806 Async Event Request Limit: 4
00:23:44.806 Number of Firmware Slots: N/A
00:23:44.806 Firmware Slot 1 Read-Only: N/A
00:23:44.806 Firmware Activation Without Reset: N/A
00:23:44.806 Multiple Update Detection Support: N/A
00:23:44.806 Firmware Update Granularity: No Information Provided
00:23:44.806 Per-Namespace SMART Log: No
00:23:44.806 Asymmetric Namespace Access Log Page: Not Supported
00:23:44.806 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:44.806 Command Effects Log Page: Not Supported
00:23:44.806 Get Log Page Extended Data: Supported
00:23:44.806 Telemetry Log Pages: Not Supported
00:23:44.806 Persistent Event Log Pages: Not Supported
00:23:44.806 Supported Log Pages Log Page: May Support
00:23:44.806 Commands Supported & Effects Log Page: Not Supported
00:23:44.806 Feature Identifiers & Effects Log Page:May Support
00:23:44.806 NVMe-MI Commands & Effects Log Page: May Support
00:23:44.806 Data Area 4 for Telemetry Log: Not Supported
00:23:44.806 Error Log Page Entries Supported: 128
00:23:44.806 Keep Alive: Not Supported
00:23:44.806
00:23:44.806 NVM Command Set Attributes
00:23:44.806 ==========================
00:23:44.806 Submission Queue Entry Size
00:23:44.806 Max: 1
00:23:44.806 Min: 1
00:23:44.806 Completion Queue Entry Size
00:23:44.806 Max: 1
00:23:44.806 Min: 1
00:23:44.806 Number of Namespaces: 0
00:23:44.806 Compare Command: Not Supported
00:23:44.806 Write Uncorrectable Command: Not Supported
00:23:44.806 Dataset Management Command: Not Supported
00:23:44.806 Write Zeroes Command: Not Supported
00:23:44.806 Set Features Save Field: Not Supported
00:23:44.807 Reservations: Not Supported
00:23:44.807 Timestamp: Not Supported
00:23:44.807 Copy: Not Supported
00:23:44.807 Volatile Write Cache: Not Present
00:23:44.807 Atomic Write Unit (Normal): 1
00:23:44.807 Atomic Write Unit (PFail): 1
00:23:44.807 Atomic Compare & Write Unit: 1
00:23:44.807 Fused Compare & Write: Supported
00:23:44.807 Scatter-Gather List
00:23:44.807 SGL Command Set: Supported
00:23:44.807 SGL Keyed: Supported
00:23:44.807 SGL Bit Bucket Descriptor: Not Supported
00:23:44.807 SGL Metadata Pointer: Not Supported
00:23:44.807 Oversized SGL: Not Supported
00:23:44.807 SGL Metadata Address: Not Supported
00:23:44.807 SGL Offset: Supported
00:23:44.807 Transport SGL Data Block: Not Supported
00:23:44.807 Replay Protected Memory Block: Not Supported
00:23:44.807
00:23:44.807 Firmware Slot Information
00:23:44.807 =========================
00:23:44.807 Active slot: 0
00:23:44.807
00:23:44.807
00:23:44.807 Error Log
00:23:44.807 =========
00:23:44.807
00:23:44.807 Active Namespaces
00:23:44.807 =================
00:23:44.807 Discovery Log Page
00:23:44.807 ==================
00:23:44.807 Generation Counter: 2
00:23:44.807 Number of Records: 2
00:23:44.807 Record Format: 0
00:23:44.807
00:23:44.807 Discovery Log Entry 0
00:23:44.807 ----------------------
00:23:44.807 Transport Type: 3 (TCP)
00:23:44.807 Address Family: 1 (IPv4)
00:23:44.807 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:44.807 Entry Flags:
00:23:44.807 Duplicate Returned Information: 1
00:23:44.807 Explicit Persistent Connection Support for Discovery: 1
00:23:44.807 Transport Requirements:
00:23:44.807 Secure Channel: Not Required
00:23:44.807 Port ID: 0 (0x0000)
00:23:44.807 Controller ID: 65535 (0xffff)
00:23:44.807 Admin Max SQ Size: 128
00:23:44.807 Transport Service Identifier: 4420
00:23:44.807 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:44.807 Transport Address: 10.0.0.2
00:23:44.807 Discovery Log Entry 1
00:23:44.807 ----------------------
00:23:44.807 Transport Type: 3 (TCP)
00:23:44.807 Address Family: 1 (IPv4)
00:23:44.807 Subsystem Type: 2 (NVM Subsystem)
00:23:44.807 Entry Flags:
00:23:44.807 Duplicate Returned Information: 0
00:23:44.807 Explicit Persistent Connection Support for Discovery: 0
00:23:44.807 Transport Requirements:
00:23:44.807 Secure Channel: Not Required
00:23:44.807 Port ID: 0 (0x0000)
00:23:44.807 Controller ID: 65535 (0xffff)
00:23:44.807 Admin Max SQ Size: 128
00:23:44.807 Transport Service Identifier: 4420
00:23:44.807 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:44.807 Transport Address: 10.0.0.2 [2024-12-09 11:58:52.558936] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:44.807 [2024-12-09 11:58:52.558947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45100) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.558953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.807 [2024-12-09 11:58:52.558959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45280) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.558964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.807 [2024-12-09 11:58:52.558969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45400) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.558974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.807 [2024-12-09 11:58:52.558979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.807 [2024-12-09 11:58:52.558994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.558998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690)
00:23:44.807 [2024-12-09 11:58:52.559009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.807 [2024-12-09 11:58:52.559023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.807 [2024-12-09 11:58:52.559275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.807 [2024-12-09 11:58:52.559281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.807 [2024-12-09 11:58:52.559285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.559296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690)
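Note: the GET LOG PAGE (02) exchanges earlier in the trace (cdw10:00ff0070, then 02ff0070, then 00010070, all against log page 0x70) are the usual discovery-log walk: the 1024-byte header and first chunk are read, the 3072-byte remainder is fetched, and finally the 8-byte generation counter is re-read to confirm the log did not change mid-walk (it stayed at 2, so the two records printed above are consistent). The sketch below shows how just the header read could be issued with SPDK's public admin-command API; it is a rough illustration under stated assumptions (an already-attached discovery controller, synchronous polling), not code from this test.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE 0x70 failed\n");
	}
	g_log_done = true;
}

/* Read the discovery log header (genctr, recfmt, numrec) at offset 0,
 * mirroring the first cdw10:00ff0070 read seen in the trace. */
static void
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page hdr = {};

	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 &hdr, sizeof(hdr), 0,
					 get_log_cb, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", hdr.genctr, hdr.numrec);
}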
00:23:44.807 [2024-12-09 11:58:52.559310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.807 [2024-12-09 11:58:52.559323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.807 [2024-12-09 11:58:52.559592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.807 [2024-12-09 11:58:52.559600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.807 [2024-12-09 11:58:52.559603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.559612] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:23:44.807 [2024-12-09 11:58:52.559618] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:23:44.807 [2024-12-09 11:58:52.559628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690)
00:23:44.807 [2024-12-09 11:58:52.559646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.807 [2024-12-09 11:58:52.559657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.807 [2024-12-09 11:58:52.559793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.807 [2024-12-09 11:58:52.559799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.807 [2024-12-09 11:58:52.559803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.559817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.559824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690)
00:23:44.807 [2024-12-09 11:58:52.559831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.807 [2024-12-09 11:58:52.559841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.807 [2024-12-09 11:58:52.559995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.807 [2024-12-09 11:58:52.560001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.807 [2024-12-09 11:58:52.560004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.560008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.807 [2024-12-09 11:58:52.560018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.560022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.807 [2024-12-09 11:58:52.560025]
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690) 00:23:44.807 [2024-12-09 11:58:52.560032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.807 [2024-12-09 11:58:52.560042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0 00:23:44.807 [2024-12-09 11:58:52.560203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.807 [2024-12-09 11:58:52.560209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.807 [2024-12-09 11:58:52.560212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690 00:23:44.807 [2024-12-09 11:58:52.560226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690) 00:23:44.807 [2024-12-09 11:58:52.560240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.807 [2024-12-09 11:58:52.560250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0 00:23:44.807 [2024-12-09 11:58:52.560448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.807 [2024-12-09 11:58:52.560455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.807 [2024-12-09 11:58:52.560458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690 00:23:44.807 [2024-12-09 11:58:52.560476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.807 [2024-12-09 11:58:52.560483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690) 00:23:44.807 [2024-12-09 11:58:52.560490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.807 [2024-12-09 11:58:52.560500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0 00:23:44.807 [2024-12-09 11:58:52.564644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.807 [2024-12-09 11:58:52.564653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.807 [2024-12-09 11:58:52.564656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.808 [2024-12-09 11:58:52.564660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690 00:23:44.808 [2024-12-09 11:58:52.564670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.808 [2024-12-09 11:58:52.564674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.808 [2024-12-09 11:58:52.564678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee3690) 00:23:44.808 [2024-12-09 11:58:52.564684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.808 [2024-12-09 11:58:52.564696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45580, cid 3, qid 0
00:23:44.808 [2024-12-09 11:58:52.564879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.564886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.564889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.564893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45580) on tqpair=0x1ee3690
00:23:44.808 [2024-12-09 11:58:52.564901] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:23:44.808
00:23:44.808 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:44.808 [2024-12-09 11:58:52.607581] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:23:44.808 [2024-12-09 11:58:52.607631] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139317 ]
00:23:44.808 [2024-12-09 11:58:52.662691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:23:44.808 [2024-12-09 11:58:52.662741] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:23:44.808 [2024-12-09 11:58:52.662746] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:23:44.808 [2024-12-09 11:58:52.662763] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:23:44.808 [2024-12-09 11:58:52.662771] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:23:44.808 [2024-12-09 11:58:52.663306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:23:44.808 [2024-12-09 11:58:52.663338] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x207e690 0
00:23:44.808 [2024-12-09 11:58:52.669646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:23:44.808 [2024-12-09 11:58:52.669665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:23:44.808 [2024-12-09 11:58:52.669671] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:23:44.808 [2024-12-09 11:58:52.669675] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:23:44.808 [2024-12-09 11:58:52.669703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.669709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.669712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690)
00:23:44.808 [2024-12-09 11:58:52.669724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
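Note: the -r argument in the invocation above is SPDK's textual transport ID, a whitespace-separated list of key:value pairs (trtype, adrfam, traddr, trsvcid, subnqn). The same string form is what the public parser spdk_nvme_transport_id_parse() consumes, so a host tool can turn it into a connectable trid. A small illustrative sketch, with the string copied from the invocation above:

#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_nvme_transport_id trid = {};
	/* Same transport ID string spdk_nvme_identify was launched with. */
	const char *str = "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			  "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

	if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}
	printf("traddr=%s trsvcid=%s subnqn=%s\n",
	       trid.traddr, trid.trsvcid, trid.subnqn);
	return 0;
}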
00:23:44.808 [2024-12-09 11:58:52.669742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0
00:23:44.808 [2024-12-09 11:58:52.677649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.677658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.677662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:44.808 [2024-12-09 11:58:52.677675] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:23:44.808 [2024-12-09 11:58:52.677682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:23:44.808 [2024-12-09 11:58:52.677687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:23:44.808 [2024-12-09 11:58:52.677701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690)
00:23:44.808 [2024-12-09 11:58:52.677716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.808 [2024-12-09 11:58:52.677729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0
00:23:44.808 [2024-12-09 11:58:52.677883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.677890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.677894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:44.808 [2024-12-09 11:58:52.677905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:23:44.808 [2024-12-09 11:58:52.677912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:23:44.808 [2024-12-09 11:58:52.677919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.677927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690)
00:23:44.808 [2024-12-09 11:58:52.677934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.808 [2024-12-09 11:58:52.677944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0
00:23:44.808 [2024-12-09 11:58:52.678104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.678111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.678114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
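Note: the two FABRIC PROPERTY GET round trips above are the fabrics equivalents of reading the VS and CAP controller registers ("read vs", "read cap"). Once a controller is attached, SPDK caches these and exposes them through register accessors; a hedged sketch (assumes an already-attached ctrlr, not code from this test):

#include <stdio.h>
#include "spdk/nvme.h"

/* Print the version and capability registers whose reads appear above
 * as FABRIC PROPERTY GET commands on the admin queue. */
static void
print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
	const union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	const union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("NVMe version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	/* CAP.MQES is zero-based; CAP.TO is in 500 ms units, so the
	 * "Reset Timeout: 15000 ms" printed earlier corresponds to TO = 30. */
	printf("max queue entries: %u\n", cap.bits.mqes + 1);
	printf("timeout: %u ms\n", cap.bits.to * 500);
}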
00:23:44.808 [2024-12-09 11:58:52.678118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:44.808 [2024-12-09 11:58:52.678124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:23:44.808 [2024-12-09 11:58:52.678134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:23:44.808 [2024-12-09 11:58:52.678141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690)
00:23:44.808 [2024-12-09 11:58:52.678155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.808 [2024-12-09 11:58:52.678166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0
00:23:44.808 [2024-12-09 11:58:52.678339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.678346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.678349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:44.808 [2024-12-09 11:58:52.678358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:44.808 [2024-12-09 11:58:52.678367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690)
00:23:44.808 [2024-12-09 11:58:52.678382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.808 [2024-12-09 11:58:52.678392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0
00:23:44.808 [2024-12-09 11:58:52.678573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.808 [2024-12-09 11:58:52.678579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.808 [2024-12-09 11:58:52.678582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.808 [2024-12-09 11:58:52.678586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:44.808 [2024-12-09 11:58:52.678591] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:23:44.808 [2024-12-09 11:58:52.678595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:23:44.808 [2024-12-09 11:58:52.678603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:44.808 [2024-12-09 11:58:52.678711] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:23:44.808 [2024-12-09 11:58:52.678716]
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:44.809 [2024-12-09 11:58:52.678723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.678727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.678731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690) 00:23:44.809 [2024-12-09 11:58:52.678738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.809 [2024-12-09 11:58:52.678748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0 00:23:44.809 [2024-12-09 11:58:52.678953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.809 [2024-12-09 11:58:52.678959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.809 [2024-12-09 11:58:52.678965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.678969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690 00:23:44.809 [2024-12-09 11:58:52.678973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:44.809 [2024-12-09 11:58:52.678983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.678987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.678991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690) 00:23:44.809 [2024-12-09 11:58:52.678997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.809 [2024-12-09 11:58:52.679007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0 00:23:44.809 [2024-12-09 11:58:52.679202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.809 [2024-12-09 11:58:52.679209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.809 [2024-12-09 11:58:52.679212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.679216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690 00:23:44.809 [2024-12-09 11:58:52.679221] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:44.809 [2024-12-09 11:58:52.679226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:44.809 [2024-12-09 11:58:52.679233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:44.809 [2024-12-09 11:58:52.679248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:44.809 [2024-12-09 11:58:52.679256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.679260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x207e690) 00:23:44.809 [2024-12-09 11:58:52.679267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.809 [2024-12-09 11:58:52.679278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0 00:23:44.809 [2024-12-09 11:58:52.679481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.809 [2024-12-09 11:58:52.679488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.809 [2024-12-09 11:58:52.679492] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.679496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=4096, cccid=0 00:23:44.809 [2024-12-09 11:58:52.679501] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0100) on tqpair(0x207e690): expected_datao=0, payload_size=4096 00:23:44.809 [2024-12-09 11:58:52.679505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.679512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.809 [2024-12-09 11:58:52.679516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.719772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.074 [2024-12-09 11:58:52.719782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.074 [2024-12-09 11:58:52.719786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.719790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690 00:23:45.074 [2024-12-09 11:58:52.719801] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:45.074 [2024-12-09 11:58:52.719806] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:45.074 [2024-12-09 11:58:52.719813] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:45.074 [2024-12-09 11:58:52.719817] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:45.074 [2024-12-09 11:58:52.719822] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:45.074 [2024-12-09 11:58:52.719827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.719835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.719842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.719846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.719850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.719857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.074 [2024-12-09 11:58:52.719869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x20e0100, cid 0, qid 0 00:23:45.074 [2024-12-09 11:58:52.720082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.074 [2024-12-09 11:58:52.720088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.074 [2024-12-09 11:58:52.720092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690 00:23:45.074 [2024-12-09 11:58:52.720103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.720116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.074 [2024-12-09 11:58:52.720122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.720135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.074 [2024-12-09 11:58:52.720142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.720154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.074 [2024-12-09 11:58:52.720161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.720174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.074 [2024-12-09 11:58:52.720178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.720188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.720195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.074 [2024-12-09 11:58:52.720207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.074 [2024-12-09 11:58:52.720219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0100, cid 0, qid 0 00:23:45.074 [2024-12-09 
11:58:52.720225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0280, cid 1, qid 0 00:23:45.074 [2024-12-09 11:58:52.720229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0400, cid 2, qid 0 00:23:45.074 [2024-12-09 11:58:52.720234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.074 [2024-12-09 11:58:52.720239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.074 [2024-12-09 11:58:52.720408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.074 [2024-12-09 11:58:52.720415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.074 [2024-12-09 11:58:52.720418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.074 [2024-12-09 11:58:52.720427] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:45.074 [2024-12-09 11:58:52.720432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.720440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.720447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:45.074 [2024-12-09 11:58:52.720453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.074 [2024-12-09 11:58:52.720461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.720467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.075 [2024-12-09 11:58:52.720477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.075 [2024-12-09 11:58:52.724645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.724653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.724657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.724661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.724727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.724736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.724744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.724748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.724754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.724766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.075 [2024-12-09 11:58:52.724960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.075 [2024-12-09 11:58:52.724967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.075 [2024-12-09 11:58:52.724972] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.724976] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=4096, cccid=4 00:23:45.075 [2024-12-09 11:58:52.724981] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0700) on tqpair(0x207e690): expected_datao=0, payload_size=4096 00:23:45.075 [2024-12-09 11:58:52.724985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.724992] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.724996] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.725161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.725164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.725177] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:45.075 [2024-12-09 11:58:52.725188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.725197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.725204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.725215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.725226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.075 [2024-12-09 11:58:52.725453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.075 [2024-12-09 11:58:52.725460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.075 [2024-12-09 11:58:52.725464] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725468] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=4096, cccid=4 00:23:45.075 [2024-12-09 11:58:52.725472] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0700) on tqpair(0x207e690): expected_datao=0, payload_size=4096 00:23:45.075 [2024-12-09 11:58:52.725477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725483] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725487] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.725677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.725680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.725696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.725705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.725712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.725723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.725735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.075 [2024-12-09 11:58:52.725936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.075 [2024-12-09 11:58:52.725943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.075 [2024-12-09 11:58:52.725946] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725950] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=4096, cccid=4 00:23:45.075 [2024-12-09 11:58:52.725954] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0700) on tqpair(0x207e690): expected_datao=0, payload_size=4096 00:23:45.075 [2024-12-09 11:58:52.725959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725978] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.725982] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.726128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.726131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.726143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported features (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726183] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:45.075 [2024-12-09 11:58:52.726187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:45.075 [2024-12-09 11:58:52.726192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:45.075 [2024-12-09 11:58:52.726206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.726216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.726223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.726237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.075 [2024-12-09 11:58:52.726250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.075 [2024-12-09 11:58:52.726255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0880, cid 5, qid 0 00:23:45.075 [2024-12-09 11:58:52.726410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.726418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.726422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.726432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.726438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.726442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0880) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.726455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.726465] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.726475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0880, cid 5, qid 0 00:23:45.075 [2024-12-09 11:58:52.726672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.726679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.726683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0880) on tqpair=0x207e690 00:23:45.075 [2024-12-09 11:58:52.726696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.075 [2024-12-09 11:58:52.726700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207e690) 00:23:45.075 [2024-12-09 11:58:52.726706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.075 [2024-12-09 11:58:52.726716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0880, cid 5, qid 0 00:23:45.075 [2024-12-09 11:58:52.726928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.075 [2024-12-09 11:58:52.726934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.075 [2024-12-09 11:58:52.726937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.726941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0880) on tqpair=0x207e690 00:23:45.076 [2024-12-09 11:58:52.726950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.726954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207e690) 00:23:45.076 [2024-12-09 11:58:52.726961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.076 [2024-12-09 11:58:52.726970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0880, cid 5, qid 0 00:23:45.076 [2024-12-09 11:58:52.727192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.076 [2024-12-09 11:58:52.727198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.076 [2024-12-09 11:58:52.727203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0880) on tqpair=0x207e690 00:23:45.076 [2024-12-09 11:58:52.727222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x207e690) 00:23:45.076 [2024-12-09 11:58:52.727233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.076 [2024-12-09 11:58:52.727240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x207e690) 00:23:45.076 [2024-12-09 11:58:52.727252] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.076 [2024-12-09 11:58:52.727259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x207e690) 00:23:45.076 [2024-12-09 11:58:52.727269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.076 [2024-12-09 11:58:52.727277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x207e690) 00:23:45.076 [2024-12-09 11:58:52.727287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.076 [2024-12-09 11:58:52.727298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0880, cid 5, qid 0 00:23:45.076 [2024-12-09 11:58:52.727303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0700, cid 4, qid 0 00:23:45.076 [2024-12-09 11:58:52.727308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0a00, cid 6, qid 0 00:23:45.076 [2024-12-09 11:58:52.727313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0b80, cid 7, qid 0 00:23:45.076 [2024-12-09 11:58:52.727563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.076 [2024-12-09 11:58:52.727570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.076 [2024-12-09 11:58:52.727573] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727577] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=8192, cccid=5 00:23:45.076 [2024-12-09 11:58:52.727581] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0880) on tqpair(0x207e690): expected_datao=0, payload_size=8192 00:23:45.076 [2024-12-09 11:58:52.727586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727660] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.076 [2024-12-09 11:58:52.727672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.076 [2024-12-09 11:58:52.727675] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727679] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=512, cccid=4 00:23:45.076 [2024-12-09 11:58:52.727683] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0700) on tqpair(0x207e690): expected_datao=0, payload_size=512 00:23:45.076 [2024-12-09 11:58:52.727688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727698] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
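
The four GET LOG PAGE notices just above pack both the log identifier and the transfer length into CDW10: the LID occupies bits 7:0 and NUMDL, the low 16 bits of the zero-based dword count, occupies bits 31:16. That is why cdw10:07ff0001 (0x7ff + 1 = 2048 dwords) is followed by a c2h_data transfer with payload_size=8192 for the Error Information log (LID 01h), while 007f0002 and 007f0003 each fetch 512 bytes of the SMART / Health and Firmware Slot logs, and 03ff0005 fetches the 4096-byte Commands Supported and Effects log. A standalone sketch of the encoding, assuming nothing beyond the spec field layout (glp_cdw10 is an illustrative helper, not SPDK's API):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe Get Log Page CDW10: bits 7:0 = LID, bits 31:16 = NUMDL,
     * the low 16 bits of the zero-based dword count. */
    static uint32_t glp_cdw10(uint8_t lid, uint32_t xfer_bytes)
    {
        uint32_t numd = xfer_bytes / 4 - 1;      /* zero-based dword count */
        return ((numd & 0xffffu) << 16) | lid;
    }

    int main(void)
    {
        printf("%08x\n", glp_cdw10(0x01, 8192)); /* 07ff0001: Error Information  */
        printf("%08x\n", glp_cdw10(0x02, 512));  /* 007f0002: SMART / Health     */
        printf("%08x\n", glp_cdw10(0x03, 512));  /* 007f0003: Firmware Slot      */
        printf("%08x\n", glp_cdw10(0x05, 4096)); /* 03ff0005: Commands & Effects */
        return 0;
    }

Requests longer than 64K dwords would spill the upper count bits (NUMDU) into CDW11, which stays 00000000 for every request in this trace.
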
00:23:45.076 [2024-12-09 11:58:52.727704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.076 [2024-12-09 11:58:52.727709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.076 [2024-12-09 11:58:52.727713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=512, cccid=6 00:23:45.076 [2024-12-09 11:58:52.727721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0a00) on tqpair(0x207e690): expected_datao=0, payload_size=512 00:23:45.076 [2024-12-09 11:58:52.727725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727732] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727737] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.076 [2024-12-09 11:58:52.727749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.076 [2024-12-09 11:58:52.727752] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727756] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x207e690): datao=0, datal=4096, cccid=7 00:23:45.076 [2024-12-09 11:58:52.727760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e0b80) on tqpair(0x207e690): expected_datao=0, payload_size=4096 00:23:45.076 [2024-12-09 11:58:52.727764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727781] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.076 [2024-12-09 11:58:52.727954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.076 [2024-12-09 11:58:52.727958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0880) on tqpair=0x207e690 00:23:45.076 [2024-12-09 11:58:52.727974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.076 [2024-12-09 11:58:52.727980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.076 [2024-12-09 11:58:52.727983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.727987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0700) on tqpair=0x207e690 00:23:45.076 [2024-12-09 11:58:52.727997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.076 [2024-12-09 11:58:52.728003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.076 [2024-12-09 11:58:52.728006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.076 [2024-12-09 11:58:52.728010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0a00) on tqpair=0x207e690 00:23:45.076 [2024-12-09 11:58:52.728017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.076 [2024-12-09 11:58:52.728023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.076 [2024-12-09 
11:58:52.728027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:45.076 [2024-12-09 11:58:52.728030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0b80) on tqpair=0x207e690
00:23:45.076 =====================================================
00:23:45.076 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:45.076 =====================================================
00:23:45.076 Controller Capabilities/Features
00:23:45.076 ================================
00:23:45.076 Vendor ID: 8086
00:23:45.076 Subsystem Vendor ID: 8086
00:23:45.076 Serial Number: SPDK00000000000001
00:23:45.076 Model Number: SPDK bdev Controller
00:23:45.076 Firmware Version: 25.01
00:23:45.076 Recommended Arb Burst: 6
00:23:45.076 IEEE OUI Identifier: e4 d2 5c
00:23:45.076 Multi-path I/O
00:23:45.076 May have multiple subsystem ports: Yes
00:23:45.076 May have multiple controllers: Yes
00:23:45.076 Associated with SR-IOV VF: No
00:23:45.076 Max Data Transfer Size: 131072
00:23:45.076 Max Number of Namespaces: 32
00:23:45.076 Max Number of I/O Queues: 127
00:23:45.076 NVMe Specification Version (VS): 1.3
00:23:45.076 NVMe Specification Version (Identify): 1.3
00:23:45.076 Maximum Queue Entries: 128
00:23:45.076 Contiguous Queues Required: Yes
00:23:45.076 Arbitration Mechanisms Supported
00:23:45.076 Weighted Round Robin: Not Supported
00:23:45.076 Vendor Specific: Not Supported
00:23:45.076 Reset Timeout: 15000 ms
00:23:45.076 Doorbell Stride: 4 bytes
00:23:45.076 NVM Subsystem Reset: Not Supported
00:23:45.076 Command Sets Supported
00:23:45.076 NVM Command Set: Supported
00:23:45.076 Boot Partition: Not Supported
00:23:45.076 Memory Page Size Minimum: 4096 bytes
00:23:45.076 Memory Page Size Maximum: 4096 bytes
00:23:45.076 Persistent Memory Region: Not Supported
00:23:45.076 Optional Asynchronous Events Supported
00:23:45.076 Namespace Attribute Notices: Supported
00:23:45.076 Firmware Activation Notices: Not Supported
00:23:45.076 ANA Change Notices: Not Supported
00:23:45.076 PLE Aggregate Log Change Notices: Not Supported
00:23:45.076 LBA Status Info Alert Notices: Not Supported
00:23:45.076 EGE Aggregate Log Change Notices: Not Supported
00:23:45.076 Normal NVM Subsystem Shutdown event: Not Supported
00:23:45.076 Zone Descriptor Change Notices: Not Supported
00:23:45.076 Discovery Log Change Notices: Not Supported
00:23:45.076 Controller Attributes
00:23:45.076 128-bit Host Identifier: Supported
00:23:45.076 Non-Operational Permissive Mode: Not Supported
00:23:45.076 NVM Sets: Not Supported
00:23:45.076 Read Recovery Levels: Not Supported
00:23:45.076 Endurance Groups: Not Supported
00:23:45.076 Predictable Latency Mode: Not Supported
00:23:45.076 Traffic Based Keep ALive: Not Supported
00:23:45.076 Namespace Granularity: Not Supported
00:23:45.076 SQ Associations: Not Supported
00:23:45.076 UUID List: Not Supported
00:23:45.076 Multi-Domain Subsystem: Not Supported
00:23:45.076 Fixed Capacity Management: Not Supported
00:23:45.076 Variable Capacity Management: Not Supported
00:23:45.076 Delete Endurance Group: Not Supported
00:23:45.076 Delete NVM Set: Not Supported
00:23:45.076 Extended LBA Formats Supported: Not Supported
00:23:45.077 Flexible Data Placement Supported: Not Supported
00:23:45.077
00:23:45.077 Controller Memory Buffer Support
00:23:45.077 ================================
00:23:45.077 Supported: No
00:23:45.077
00:23:45.077 Persistent Memory Region Support
00:23:45.077 ================================
00:23:45.077 Supported: No
00:23:45.077
00:23:45.077 Admin Command Set Attributes
00:23:45.077 ============================
00:23:45.077 Security Send/Receive: Not Supported
00:23:45.077 Format NVM: Not Supported
00:23:45.077 Firmware Activate/Download: Not Supported
00:23:45.077 Namespace Management: Not Supported
00:23:45.077 Device Self-Test: Not Supported
00:23:45.077 Directives: Not Supported
00:23:45.077 NVMe-MI: Not Supported
00:23:45.077 Virtualization Management: Not Supported
00:23:45.077 Doorbell Buffer Config: Not Supported
00:23:45.077 Get LBA Status Capability: Not Supported
00:23:45.077 Command & Feature Lockdown Capability: Not Supported
00:23:45.077 Abort Command Limit: 4
00:23:45.077 Async Event Request Limit: 4
00:23:45.077 Number of Firmware Slots: N/A
00:23:45.077 Firmware Slot 1 Read-Only: N/A
00:23:45.077 Firmware Activation Without Reset: N/A
00:23:45.077 Multiple Update Detection Support: N/A
00:23:45.077 Firmware Update Granularity: No Information Provided
00:23:45.077 Per-Namespace SMART Log: No
00:23:45.077 Asymmetric Namespace Access Log Page: Not Supported
00:23:45.077 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:45.077 Command Effects Log Page: Supported
00:23:45.077 Get Log Page Extended Data: Supported
00:23:45.077 Telemetry Log Pages: Not Supported
00:23:45.077 Persistent Event Log Pages: Not Supported
00:23:45.077 Supported Log Pages Log Page: May Support
00:23:45.077 Commands Supported & Effects Log Page: Not Supported
00:23:45.077 Feature Identifiers & Effects Log Page:May Support
00:23:45.077 NVMe-MI Commands & Effects Log Page: May Support
00:23:45.077 Data Area 4 for Telemetry Log: Not Supported
00:23:45.077 Error Log Page Entries Supported: 128
00:23:45.077 Keep Alive: Supported
00:23:45.077 Keep Alive Granularity: 10000 ms
00:23:45.077
00:23:45.077 NVM Command Set Attributes
00:23:45.077 ==========================
00:23:45.077 Submission Queue Entry Size
00:23:45.077 Max: 64
00:23:45.077 Min: 64
00:23:45.077 Completion Queue Entry Size
00:23:45.077 Max: 16
00:23:45.077 Min: 16
00:23:45.077 Number of Namespaces: 32
00:23:45.077 Compare Command: Supported
00:23:45.077 Write Uncorrectable Command: Not Supported
00:23:45.077 Dataset Management Command: Supported
00:23:45.077 Write Zeroes Command: Supported
00:23:45.077 Set Features Save Field: Not Supported
00:23:45.077 Reservations: Supported
00:23:45.077 Timestamp: Not Supported
00:23:45.077 Copy: Supported
00:23:45.077 Volatile Write Cache: Present
00:23:45.077 Atomic Write Unit (Normal): 1
00:23:45.077 Atomic Write Unit (PFail): 1
00:23:45.077 Atomic Compare & Write Unit: 1
00:23:45.077 Fused Compare & Write: Supported
00:23:45.077 Scatter-Gather List
00:23:45.077 SGL Command Set: Supported
00:23:45.077 SGL Keyed: Supported
00:23:45.077 SGL Bit Bucket Descriptor: Not Supported
00:23:45.077 SGL Metadata Pointer: Not Supported
00:23:45.077 Oversized SGL: Not Supported
00:23:45.077 SGL Metadata Address: Not Supported
00:23:45.077 SGL Offset: Supported
00:23:45.077 Transport SGL Data Block: Not Supported
00:23:45.077 Replay Protected Memory Block: Not Supported
00:23:45.077
00:23:45.077 Firmware Slot Information
00:23:45.077 =========================
00:23:45.077 Active slot: 1
00:23:45.077 Slot 1 Firmware Revision: 25.01
00:23:45.077
00:23:45.077
00:23:45.077 Commands Supported and Effects
00:23:45.077 ==============================
00:23:45.077 Admin Commands
00:23:45.077 --------------
00:23:45.077 Get Log Page (02h): Supported
00:23:45.077 Identify (06h): Supported
00:23:45.077 Abort (08h): Supported
00:23:45.077 Set Features (09h): Supported
00:23:45.077 Get Features (0Ah): Supported
00:23:45.077 Asynchronous Event Request (0Ch): Supported
00:23:45.077 Keep Alive (18h): Supported
00:23:45.077 I/O Commands
00:23:45.077 ------------
00:23:45.077 Flush (00h): Supported LBA-Change
00:23:45.077 Write (01h): Supported LBA-Change
00:23:45.077 Read (02h): Supported
00:23:45.077 Compare (05h): Supported
00:23:45.077 Write Zeroes (08h): Supported LBA-Change
00:23:45.077 Dataset Management (09h): Supported LBA-Change
00:23:45.077 Copy (19h): Supported LBA-Change
00:23:45.077
00:23:45.077 Error Log
00:23:45.077 =========
00:23:45.077
00:23:45.077 Arbitration
00:23:45.077 ===========
00:23:45.077 Arbitration Burst: 1
00:23:45.077
00:23:45.077 Power Management
00:23:45.077 ================
00:23:45.077 Number of Power States: 1
00:23:45.077 Current Power State: Power State #0
00:23:45.077 Power State #0:
00:23:45.077 Max Power: 0.00 W
00:23:45.077 Non-Operational State: Operational
00:23:45.077 Entry Latency: Not Reported
00:23:45.077 Exit Latency: Not Reported
00:23:45.077 Relative Read Throughput: 0
00:23:45.077 Relative Read Latency: 0
00:23:45.077 Relative Write Throughput: 0
00:23:45.077 Relative Write Latency: 0
00:23:45.077 Idle Power: Not Reported
00:23:45.077 Active Power: Not Reported
00:23:45.077 Non-Operational Permissive Mode: Not Supported
00:23:45.077
00:23:45.077 Health Information
00:23:45.077 ==================
00:23:45.077 Critical Warnings:
00:23:45.077 Available Spare Space: OK
00:23:45.077 Temperature: OK
00:23:45.077 Device Reliability: OK
00:23:45.077 Read Only: No
00:23:45.077 Volatile Memory Backup: OK
00:23:45.077 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:45.077 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:45.077 Available Spare: 0%
00:23:45.077 Available Spare Threshold: 0%
00:23:45.077 Life Percentage Used:[2024-12-09 11:58:52.728125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:45.077 [2024-12-09 11:58:52.728130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x207e690)
00:23:45.077 [2024-12-09 11:58:52.728137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:45.077 [2024-12-09 11:58:52.728149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0b80, cid 7, qid 0
00:23:45.077 [2024-12-09 11:58:52.728336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:45.077 [2024-12-09 11:58:52.728343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:45.077 [2024-12-09 11:58:52.728346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:45.077 [2024-12-09 11:58:52.728350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0b80) on tqpair=0x207e690
00:23:45.077 [2024-12-09 11:58:52.728383] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:23:45.077 [2024-12-09 11:58:52.728392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0100) on tqpair=0x207e690
00:23:45.077 [2024-12-09 11:58:52.728398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:45.077 [2024-12-09 11:58:52.728404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0280) on tqpair=0x207e690
00:23:45.077 [2024-12-09 11:58:52.728410] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.077 [2024-12-09 11:58:52.728415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0400) on tqpair=0x207e690 00:23:45.077 [2024-12-09 11:58:52.728420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.077 [2024-12-09 11:58:52.728425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.077 [2024-12-09 11:58:52.728429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.077 [2024-12-09 11:58:52.728437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.728441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.728444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.077 [2024-12-09 11:58:52.728451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.077 [2024-12-09 11:58:52.728463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.077 [2024-12-09 11:58:52.732647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.077 [2024-12-09 11:58:52.732656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.077 [2024-12-09 11:58:52.732659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.732663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.077 [2024-12-09 11:58:52.732670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.732674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.732677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.077 [2024-12-09 11:58:52.732684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.077 [2024-12-09 11:58:52.732698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.077 [2024-12-09 11:58:52.732884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.077 [2024-12-09 11:58:52.732891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.077 [2024-12-09 11:58:52.732894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.077 [2024-12-09 11:58:52.732898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.077 [2024-12-09 11:58:52.732903] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:45.077 [2024-12-09 11:58:52.732908] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:45.077 [2024-12-09 11:58:52.732917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.732921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.732924] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.732931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.732941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.733108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.733115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.733119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.733132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.733149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.733159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.733359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.733366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.733370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.733383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.733397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.733407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.733579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.733585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.733589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.733602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.733617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.733627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.733824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.733831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.733834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.733849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.733856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.733863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.733873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.734054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.734060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.734064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.734077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.734093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.734103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.734297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.734303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.734307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.734320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.734335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.734345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 
11:58:52.734551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.734557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.734560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.734574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.734589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.734598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.734799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.734805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.734809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.734823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.734830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.734837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.734847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.735037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.735044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 [2024-12-09 11:58:52.735047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.735061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.735077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.735087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.078 [2024-12-09 11:58:52.735234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.078 [2024-12-09 11:58:52.735241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.078 
[2024-12-09 11:58:52.735244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.078 [2024-12-09 11:58:52.735258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.078 [2024-12-09 11:58:52.735265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.078 [2024-12-09 11:58:52.735272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.078 [2024-12-09 11:58:52.735282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.735504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.735510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.735514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.735527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.079 [2024-12-09 11:58:52.735542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.079 [2024-12-09 11:58:52.735551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.735773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.735780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.735783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.735797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.735804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.079 [2024-12-09 11:58:52.735811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.079 [2024-12-09 11:58:52.735821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.736024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.736030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.736034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.736047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.079 [2024-12-09 11:58:52.736061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.079 [2024-12-09 11:58:52.736074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.736292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.736298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.736302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.736315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.079 [2024-12-09 11:58:52.736330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.079 [2024-12-09 11:58:52.736340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.736511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.736517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.736520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.736534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.736542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x207e690) 00:23:45.079 [2024-12-09 11:58:52.736548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.079 [2024-12-09 11:58:52.736558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e0580, cid 3, qid 0 00:23:45.079 [2024-12-09 11:58:52.740647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.079 [2024-12-09 11:58:52.740657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.079 [2024-12-09 11:58:52.740661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.079 [2024-12-09 11:58:52.740665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20e0580) on tqpair=0x207e690 00:23:45.079 [2024-12-09 11:58:52.740673] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 
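
The long run of FABRIC PROPERTY SET / FABRIC PROPERTY GET entries above is the shutdown handshake: because the controller reports RTD3E = 0 us, SPDK falls back to its 10000 ms default shutdown timeout, writes a normal-shutdown request into CC, and then polls CSTS until the controller reports completion, which here took 7 ms. A simplified register-level sketch of that loop; read_prop, write_prop, and sleep_ms are assumed transport hooks standing in for SPDK's Fabrics Property Get/Set path, not real SPDK APIs:

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC     0x14u        /* Controller Configuration */
    #define NVME_REG_CSTS   0x1cu        /* Controller Status        */
    #define CC_SHN_MASK     (0x3u << 14)
    #define CC_SHN_NORMAL   (0x1u << 14) /* SHN = 01b: normal shutdown    */
    #define CSTS_SHST_MASK  (0x3u << 2)
    #define CSTS_SHST_DONE  (0x2u << 2)  /* SHST = 10b: shutdown complete */

    /* Assumed hooks: over NVMe-oF each of these becomes a Fabrics
     * Property Get/Set capsule on the admin queue, which is exactly
     * what the repeated FABRIC PROPERTY GET qid:0 notices record. */
    extern uint32_t read_prop(uint32_t offset);
    extern void write_prop(uint32_t offset, uint32_t value);
    extern void sleep_ms(unsigned ms);

    static bool shutdown_controller(unsigned timeout_ms)
    {
        uint32_t cc = read_prop(NVME_REG_CC);
        write_prop(NVME_REG_CC, (cc & ~CC_SHN_MASK) | CC_SHN_NORMAL);

        for (unsigned waited = 0; waited < timeout_ms; waited++) {
            if ((read_prop(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_DONE)
                return true;             /* observed above after ~7 ms */
            sleep_ms(1);                 /* poll roughly once per ms   */
        }
        return false;                    /* shutdown deadline missed   */
    }
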
00:23:45.079 0% 00:23:45.079 Data Units Read: 0 00:23:45.079 Data Units Written: 0 00:23:45.079 Host Read Commands: 0 00:23:45.079 Host Write Commands: 0 00:23:45.079 Controller Busy Time: 0 minutes 00:23:45.079 Power Cycles: 0 00:23:45.079 Power On Hours: 0 hours 00:23:45.079 Unsafe Shutdowns: 0 00:23:45.079 Unrecoverable Media Errors: 0 00:23:45.079 Lifetime Error Log Entries: 0 00:23:45.079 Warning Temperature Time: 0 minutes 00:23:45.079 Critical Temperature Time: 0 minutes 00:23:45.079 00:23:45.079 Number of Queues 00:23:45.079 ================ 00:23:45.079 Number of I/O Submission Queues: 127 00:23:45.079 Number of I/O Completion Queues: 127 00:23:45.079 00:23:45.079 Active Namespaces 00:23:45.079 ================= 00:23:45.079 Namespace ID:1 00:23:45.079 Error Recovery Timeout: Unlimited 00:23:45.079 Command Set Identifier: NVM (00h) 00:23:45.079 Deallocate: Supported 00:23:45.079 Deallocated/Unwritten Error: Not Supported 00:23:45.079 Deallocated Read Value: Unknown 00:23:45.079 Deallocate in Write Zeroes: Not Supported 00:23:45.079 Deallocated Guard Field: 0xFFFF 00:23:45.079 Flush: Supported 00:23:45.079 Reservation: Supported 00:23:45.079 Namespace Sharing Capabilities: Multiple Controllers 00:23:45.079 Size (in LBAs): 131072 (0GiB) 00:23:45.079 Capacity (in LBAs): 131072 (0GiB) 00:23:45.079 Utilization (in LBAs): 131072 (0GiB) 00:23:45.079 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:45.079 EUI64: ABCDEF0123456789 00:23:45.079 UUID: 0ca9d12e-17c0-497d-91d4-96f817fe5307 00:23:45.079 Thin Provisioning: Not Supported 00:23:45.079 Per-NS Atomic Units: Yes 00:23:45.079 Atomic Boundary Size (Normal): 0 00:23:45.079 Atomic Boundary Size (PFail): 0 00:23:45.079 Atomic Boundary Offset: 0 00:23:45.079 Maximum Single Source Range Length: 65535 00:23:45.079 Maximum Copy Length: 65535 00:23:45.079 Maximum Source Range Count: 1 00:23:45.079 NGUID/EUI64 Never Reused: No 00:23:45.079 Namespace Write Protected: No 00:23:45.079 Number of LBA Formats: 1 00:23:45.079 Current LBA Format: LBA Format #00 00:23:45.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:45.079 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # sync 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # set +e 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # for i in {1..20} 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:23:45.079 rmmod nvme_tcp 00:23:45.079 rmmod nvme_fabrics 00:23:45.079 rmmod nvme_keyring 00:23:45.079 11:58:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # set -e 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@130 -- # return 0 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 139201 ']' 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 139201 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 139201 ']' 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 139201 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139201 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139201' 00:23:45.079 killing process with pid 139201 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 139201 00:23:45.079 11:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 139201 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # iptr 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # remove_spdk_ns 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.341 11:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:23:47.888 00:23:47.888 real 0m11.362s 00:23:47.888 user 0m8.342s 00:23:47.888 sys 0m5.937s 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.888 ************************************ 00:23:47.888 END TEST nvmf_identify 00:23:47.888 ************************************ 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test 
nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.888 ************************************ 00:23:47.888 START TEST nvmf_perf 00:23:47.888 ************************************ 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.888 * Looking for test storage... 00:23:47.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.888 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.889 --rc genhtml_branch_coverage=1 00:23:47.889 --rc genhtml_function_coverage=1 00:23:47.889 --rc genhtml_legend=1 00:23:47.889 --rc geninfo_all_blocks=1 00:23:47.889 --rc geninfo_unexecuted_blocks=1 00:23:47.889 00:23:47.889 ' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.889 --rc genhtml_branch_coverage=1 00:23:47.889 --rc genhtml_function_coverage=1 00:23:47.889 --rc genhtml_legend=1 00:23:47.889 --rc geninfo_all_blocks=1 00:23:47.889 --rc geninfo_unexecuted_blocks=1 00:23:47.889 00:23:47.889 ' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.889 --rc genhtml_branch_coverage=1 00:23:47.889 --rc genhtml_function_coverage=1 00:23:47.889 --rc genhtml_legend=1 00:23:47.889 --rc geninfo_all_blocks=1 00:23:47.889 --rc geninfo_unexecuted_blocks=1 00:23:47.889 00:23:47.889 ' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:47.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.889 --rc genhtml_branch_coverage=1 00:23:47.889 --rc genhtml_function_coverage=1 00:23:47.889 --rc genhtml_legend=1 00:23:47.889 --rc geninfo_all_blocks=1 00:23:47.889 --rc geninfo_unexecuted_blocks=1 00:23:47.889 00:23:47.889 ' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
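
The version gymnastics traced above are scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before exporting the legacy branch/function-coverage flags. As a condensed illustration only (the helper below is my own sketch, not SPDK's cmp_versions, though it follows the same split-on-dots, field-by-field numeric compare):

    version_lt() {                          # succeeds when $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0} # missing fields compare as 0, so "2" == "2.0"
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                            # equal is not less-than
    }

    version_lt "1.15" "2" && echo "old lcov: keep the --rc lcov_* compatibility options"
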
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
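
Before any fabrics test runs, the common setup above mints one host identity and reuses it everywhere. A small sketch of that step (variable names as in the trace; the parameter-expansion shortcut for pulling out the UUID is mine):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # reuse the trailing UUID as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later initiator-side calls can splice the identity in verbatim, e.g.
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
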
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # : 0 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:23:47.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@56 -- # have_pci_nics=0 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.889 
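
Note the "[: : integer expression expected" complaint above: build_nvmf_app_args runs a numeric test on a variable that is empty in this environment, and test(1) rejects the empty string. The trace only shows the expanded '', so the variable name below is a stand-in, but the failure mode and the usual defaulting fix look like this:

    SOME_TEST_FLAG=''                    # stand-in name; the real variable is not visible in the trace
    [ "$SOME_TEST_FLAG" -eq 1 ]          # -> [: : integer expression expected (exit status 2)

    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0: test stays well-formed
        echo "flag enabled"
    fi
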
11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # xtrace_disable 00:23:47.889 11:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_devs=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_devs 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_net_devs=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # pci_drivers=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # local -A pci_drivers 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # net_devs=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga net_devs 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # e810=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga e810 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # x722=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga x722 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # mlx=() 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # local -ga mlx 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
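
The e810/x722/mlx arrays being filled above amount to a vendor:device lookup table: every candidate NIC gets bucketed by its PCI ID so the harness knows which driver family it is exercising. A standalone sketch of the same classification (the lspci parsing is my own; the ID-to-family table is copied from the trace):

    declare -A family=(
        [8086:1592]=e810 [8086:159b]=e810
        [8086:37d2]=x722
        [15b3:a2dc]=mlx [15b3:1021]=mlx [15b3:a2d6]=mlx [15b3:101d]=mlx
        [15b3:101b]=mlx [15b3:1017]=mlx [15b3:1019]=mlx [15b3:1015]=mlx
        [15b3:1013]=mlx
    )
    # lspci -nD prints "0000:4b:00.0 0200: 8086:159b ..." per PCI function
    while read -r addr _class id _; do
        [[ -n ${family[$id]:-} ]] && echo "$addr -> ${family[$id]}"
    done < <(lspci -nD)

On this rig that yields exactly the two e810 hits reported next.
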
nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.036 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.036 
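
The "Found net devices under 0000:4b:00.0: cvl_0_0" lines come from a sysfs glob: a network PCI function lists its kernel interfaces under its devices/.../net/ directory. A minimal equivalent, using the address from this run:

    pci=0000:4b:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                  # guard: the glob may match nothing
        name=${path##*/}
        state=$(<"/sys/class/net/$name/operstate")  # the harness insists on "up"
        echo "Found net device under $pci: $name ($state)"
    done
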
11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:23:56.036 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.036 11:59:02 
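
With both ports discovered, nvmf_tcp_init splits the NIC pair across a network namespace: the target port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so test traffic really crosses the wire. Condensed from this trace and the iptables/ping steps that follow it (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
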
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:23:56.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:56.037 00:23:56.037 --- 10.0.0.2 ping statistics --- 00:23:56.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.037 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:23:56.037 00:23:56.037 --- 10.0.0.1 ping statistics --- 00:23:56.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.037 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=143594 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 143594 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 143594 ']' 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:56.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.037 11:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.037 [2024-12-09 11:59:02.959895] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:23:56.037 [2024-12-09 11:59:02.959962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.037 [2024-12-09 11:59:03.061381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.037 [2024-12-09 11:59:03.114466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.037 [2024-12-09 11:59:03.114523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.037 [2024-12-09 11:59:03.114532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.037 [2024-12-09 11:59:03.114539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.037 [2024-12-09 11:59:03.114545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.037 [2024-12-09 11:59:03.116930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.037 [2024-12-09 11:59:03.117055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.037 [2024-12-09 11:59:03.117222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.037 [2024-12-09 11:59:03.117223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:56.037 11:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:56.609 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:56.609 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:56.869 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:56.869 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:56.869 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
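
Everything from here to the first perf run is plain JSON-RPC against the target that was just started inside the namespace. Collected into one place (the rpc.py path is shortened for readability; commands and arguments are as traced):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc.py bdev_malloc_create 64 512                     # -> Malloc0: 64 MiB of RAM, 512 B blocks
    rpc.py nvmf_create_transport -t tcp -o               # "*** TCP Transport Init ***"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2: the local 0000:65:00.0 disk
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The RAM-backed Malloc0 and the real NVMe namespace sit behind the same subsystem, which is why the perf tables below consistently show two very different latency rows.
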
00:23:56.869 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:56.869 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:56.870 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:56.870 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.130 [2024-12-09 11:59:04.853717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.130 11:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.392 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:57.392 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.392 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:57.392 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:57.654 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.914 [2024-12-09 11:59:05.588431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.914 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:58.174 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:58.174 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:58.174 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:58.174 11:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:59.558 Initializing NVMe Controllers 00:23:59.558 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:59.558 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:59.558 Initialization complete. Launching workers. 
00:23:59.558 ======================================================== 00:23:59.558 Latency(us) 00:23:59.558 Device Information : IOPS MiB/s Average min max 00:23:59.558 PCIE (0000:65:00.0) NSID 1 from core 0: 77481.44 302.66 412.39 13.25 5255.76 00:23:59.558 ======================================================== 00:23:59.558 Total : 77481.44 302.66 412.39 13.25 5255.76 00:23:59.558 00:23:59.558 11:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.500 Initializing NVMe Controllers 00:24:00.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:00.500 Initialization complete. Launching workers. 00:24:00.500 ======================================================== 00:24:00.500 Latency(us) 00:24:00.500 Device Information : IOPS MiB/s Average min max 00:24:00.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.75 0.27 14383.99 278.24 45651.58 00:24:00.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.79 0.22 17747.06 7055.56 48887.08 00:24:00.500 ======================================================== 00:24:00.500 Total : 126.54 0.49 15893.40 278.24 48887.08 00:24:00.500 00:24:00.500 11:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.887 Initializing NVMe Controllers 00:24:01.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.887 Initialization complete. Launching workers. 00:24:01.887 ======================================================== 00:24:01.887 Latency(us) 00:24:01.887 Device Information : IOPS MiB/s Average min max 00:24:01.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11740.25 45.86 2725.92 459.24 6897.99 00:24:01.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3495.67 13.65 9168.98 5638.02 17828.61 00:24:01.887 ======================================================== 00:24:01.887 Total : 15235.92 59.52 4204.19 459.24 17828.61 00:24:01.887 00:24:01.887 11:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:01.887 11:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:01.887 11:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.432 Initializing NVMe Controllers 00:24:04.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.432 Controller IO queue size 128, less than required. 00:24:04.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:04.432 Controller IO queue size 128, less than required. 00:24:04.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.432 Initialization complete. Launching workers. 00:24:04.432 ======================================================== 00:24:04.432 Latency(us) 00:24:04.432 Device Information : IOPS MiB/s Average min max 00:24:04.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1903.48 475.87 67983.38 33671.14 112745.80 00:24:04.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.00 152.25 220173.98 63778.54 299006.65 00:24:04.432 ======================================================== 00:24:04.432 Total : 2512.48 628.12 104872.56 33671.14 299006.65 00:24:04.432 00:24:04.432 11:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:04.693 No valid NVMe controllers or AIO or URING devices found 00:24:04.693 Initializing NVMe Controllers 00:24:04.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.693 Controller IO queue size 128, less than required. 00:24:04.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.693 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:04.693 Controller IO queue size 128, less than required. 00:24:04.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.693 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:04.693 WARNING: Some requested NVMe devices were skipped 00:24:04.693 11:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:07.239 Initializing NVMe Controllers 00:24:07.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.239 Controller IO queue size 128, less than required. 00:24:07.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.239 Controller IO queue size 128, less than required. 00:24:07.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.239 Initialization complete. Launching workers. 
00:24:07.239 00:24:07.239 ==================== 00:24:07.239 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:07.239 TCP transport: 00:24:07.239 polls: 37873 00:24:07.239 idle_polls: 23932 00:24:07.239 sock_completions: 13941 00:24:07.239 nvme_completions: 6901 00:24:07.239 submitted_requests: 10328 00:24:07.239 queued_requests: 1 00:24:07.239 00:24:07.239 ==================== 00:24:07.239 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:07.239 TCP transport: 00:24:07.239 polls: 38455 00:24:07.239 idle_polls: 23511 00:24:07.239 sock_completions: 14944 00:24:07.239 nvme_completions: 7791 00:24:07.239 submitted_requests: 11702 00:24:07.239 queued_requests: 1 00:24:07.239 ======================================================== 00:24:07.239 Latency(us) 00:24:07.239 Device Information : IOPS MiB/s Average min max 00:24:07.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1722.81 430.70 75666.10 41975.94 144425.10 00:24:07.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1945.02 486.26 66519.37 30695.83 124734.04 00:24:07.239 ======================================================== 00:24:07.239 Total : 3667.83 916.96 70815.66 30695.83 144425.10 00:24:07.239 00:24:07.239 11:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:07.239 11:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # sync 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # set +e 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # for i in {1..20} 00:24:07.239 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:24:07.239 rmmod nvme_tcp 00:24:07.239 rmmod nvme_fabrics 00:24:07.499 rmmod nvme_keyring 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # set -e 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@130 -- # return 0 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 143594 ']' 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 143594 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 143594 ']' 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 143594 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.499 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143594 00:24:07.499 11:59:15 
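
A few sanity checks on the numbers above, done offline rather than by the harness (bc is assumed to be available): the MiB/s column is just IOPS times IO size, IOPS times average latency recovers the queue depth each run was driven at (Little's law), and the transport counters decompose cleanly, with non-idle polls equal to socket completion batches here.

    bc -l <<< '1903.48 * 262144 / 1048576'      # 475.87 MiB/s, matches the -o 262144 NSID 1 row
    bc -l <<< '69.75 * 14383.99 / 1000000'      # ~1.00 in flight: the -q 1 run
    bc -l <<< '11740.25 * 2725.92 / 1000000'    # ~32.0 in flight: the -q 32 run
    echo $((37873 - 23932))                     # 13941 busy polls == sock_completions for NSID 1
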
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.500 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.500 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143594' 00:24:07.500 killing process with pid 143594 00:24:07.500 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 143594 00:24:07.500 11:59:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 143594 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # iptr 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # remove_spdk_ns 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.412 11:59:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:24:11.958 00:24:11.958 real 0m24.035s 00:24:11.958 user 0m57.594s 00:24:11.958 sys 0m8.623s 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:11.958 ************************************ 00:24:11.958 END TEST nvmf_perf 00:24:11.958 ************************************ 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.958 ************************************ 00:24:11.958 START TEST nvmf_fio_host 00:24:11.958 ************************************ 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.958 * Looking for test storage... 
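
The teardown that closed both tests above is one reusable pattern: strip only the SPDK-tagged firewall rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), unload nvme-tcp/nvme-fabrics, then reap the target. A condensed sketch of the killprocess helper as it behaves in this trace (body abridged by me; the real helper carries more platform cases):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                        # already gone: nothing to do
        # refuse to shoot a sudo wrapper; the trace checks the comm name first
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap it; the app may exit non-zero
    }

    killprocess "$nvmfpid"
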
00:24:11.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.958 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.959 --rc genhtml_branch_coverage=1 00:24:11.959 --rc genhtml_function_coverage=1 00:24:11.959 --rc genhtml_legend=1 00:24:11.959 --rc geninfo_all_blocks=1 00:24:11.959 --rc geninfo_unexecuted_blocks=1 00:24:11.959 00:24:11.959 ' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.959 --rc genhtml_branch_coverage=1 00:24:11.959 --rc genhtml_function_coverage=1 00:24:11.959 --rc genhtml_legend=1 00:24:11.959 --rc geninfo_all_blocks=1 00:24:11.959 --rc geninfo_unexecuted_blocks=1 00:24:11.959 00:24:11.959 ' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.959 --rc genhtml_branch_coverage=1 00:24:11.959 --rc genhtml_function_coverage=1 00:24:11.959 --rc genhtml_legend=1 00:24:11.959 --rc geninfo_all_blocks=1 00:24:11.959 --rc geninfo_unexecuted_blocks=1 00:24:11.959 00:24:11.959 ' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.959 --rc genhtml_branch_coverage=1 00:24:11.959 --rc genhtml_function_coverage=1 00:24:11.959 --rc genhtml_legend=1 00:24:11.959 --rc geninfo_all_blocks=1 00:24:11.959 --rc geninfo_unexecuted_blocks=1 00:24:11.959 00:24:11.959 ' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.959 11:59:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.959 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # : 0 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:24:11.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@56 -- # have_pci_nics=0 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.960 
11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # xtrace_disable 00:24:11.960 11:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_devs=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_devs 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_net_devs=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # pci_drivers=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # local -A pci_drivers 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # net_devs=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga net_devs 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # e810=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga e810 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # x722=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga x722 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # mlx=() 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # local -ga mlx 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:20.108 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:20.108 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:20.108 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:20.108 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:24:20.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:24:20.108 00:24:20.108 --- 10.0.0.2 ping statistics --- 00:24:20.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.108 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:24:20.108 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:24:20.108 00:24:20.108 --- 10.0.0.1 ping statistics --- 00:24:20.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.109 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:20.109 11:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=150650 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 150650 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 150650 ']' 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.109 [2024-12-09 11:59:27.118949] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:24:20.109 [2024-12-09 11:59:27.119016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.109 [2024-12-09 11:59:27.218764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.109 [2024-12-09 11:59:27.271453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.109 [2024-12-09 11:59:27.271506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.109 [2024-12-09 11:59:27.271515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.109 [2024-12-09 11:59:27.271522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.109 [2024-12-09 11:59:27.271528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.109 [2024-12-09 11:59:27.273720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.109 [2024-12-09 11:59:27.273901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.109 [2024-12-09 11:59:27.274044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.109 [2024-12-09 11:59:27.274045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:20.109 11:59:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:20.370 [2024-12-09 11:59:28.064414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.370 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:20.370 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.370 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.370 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:20.631 Malloc1 00:24:20.631 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.891 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:20.892 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.152 [2024-12-09 11:59:28.872537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.152 11:59:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:21.413 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:21.414 11:59:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.674 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:21.674 fio-3.35 00:24:21.674 Starting 1 thread 00:24:24.241 00:24:24.241 test: (groupid=0, jobs=1): 
err= 0: pid=151196: Mon Dec 9 11:59:31 2024 00:24:24.241 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec) 00:24:24.241 slat (usec): min=2, max=273, avg= 2.21, stdev= 2.41 00:24:24.241 clat (usec): min=3088, max=10139, avg=5138.65, stdev=502.88 00:24:24.241 lat (usec): min=3090, max=10145, avg=5140.86, stdev=503.25 00:24:24.241 clat percentiles (usec): 00:24:24.241 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:24.241 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:24.241 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:24.241 | 99.00th=[ 6980], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[ 9896], 00:24:24.241 | 99.99th=[10028] 00:24:24.241 bw ( KiB/s): min=53328, max=55712, per=99.94%, avg=54860.00, stdev=1091.63, samples=4 00:24:24.241 iops : min=13332, max=13928, avg=13714.50, stdev=272.93, samples=4 00:24:24.241 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2004msec); 0 zone resets 00:24:24.241 slat (usec): min=2, max=269, avg= 2.27, stdev= 1.85 00:24:24.241 clat (usec): min=2566, max=8972, avg=4165.72, stdev=471.47 00:24:24.241 lat (usec): min=2568, max=8974, avg=4168.00, stdev=471.92 00:24:24.241 clat percentiles (usec): 00:24:24.241 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:24:24.241 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:24.241 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:24.241 | 99.00th=[ 6325], 99.50th=[ 7635], 99.90th=[ 8225], 99.95th=[ 8356], 00:24:24.241 | 99.99th=[ 8586] 00:24:24.241 bw ( KiB/s): min=53776, max=55240, per=100.00%, avg=54788.00, stdev=682.97, samples=4 00:24:24.241 iops : min=13444, max=13810, avg=13697.00, stdev=170.74, samples=4 00:24:24.241 lat (msec) : 4=16.00%, 10=83.99%, 20=0.01% 00:24:24.241 cpu : usr=79.28%, sys=19.72%, ctx=29, majf=0, minf=16 00:24:24.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:24.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.241 issued rwts: total=27502,27449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.241 00:24:24.241 Run status group 0 (all jobs): 00:24:24.241 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:24:24.241 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2004-2004msec 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 
00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:24.241 11:59:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:24.509 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:24.509 fio-3.35 00:24:24.509 Starting 1 thread 00:24:27.053 00:24:27.053 test: (groupid=0, jobs=1): err= 0: pid=152012: Mon Dec 9 11:59:34 2024 00:24:27.053 read: IOPS=9486, BW=148MiB/s (155MB/s)(297MiB/2005msec) 00:24:27.053 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.59 00:24:27.053 clat (usec): min=2140, max=15258, avg=8243.01, stdev=1975.89 00:24:27.053 lat (usec): min=2143, max=15262, avg=8246.63, stdev=1976.01 00:24:27.053 clat percentiles (usec): 00:24:27.053 | 1.00th=[ 4080], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6521], 00:24:27.053 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:24:27.053 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:24:27.053 | 99.00th=[12911], 99.50th=[13566], 99.90th=[14877], 99.95th=[15008], 00:24:27.053 | 99.99th=[15270] 00:24:27.053 bw ( KiB/s): min=70688, max=83392, per=49.35%, avg=74912.00, stdev=5759.35, samples=4 00:24:27.053 iops : min= 4418, max= 5212, avg=4682.00, stdev=359.96, samples=4 00:24:27.053 write: IOPS=5547, BW=86.7MiB/s (90.9MB/s)(153MiB/1770msec); 0 zone resets 00:24:27.053 slat (usec): min=39, 
max=328, avg=40.95, stdev= 7.18 00:24:27.053 clat (usec): min=3522, max=14626, avg=9018.26, stdev=1393.61 00:24:27.053 lat (usec): min=3562, max=14762, avg=9059.21, stdev=1395.36 00:24:27.053 clat percentiles (usec): 00:24:27.053 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 7832], 00:24:27.053 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:27.053 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:24:27.053 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14353], 99.95th=[14484], 00:24:27.053 | 99.99th=[14615] 00:24:27.053 bw ( KiB/s): min=73216, max=86816, per=87.99%, avg=78096.00, stdev=6054.04, samples=4 00:24:27.053 iops : min= 4576, max= 5426, avg=4881.00, stdev=378.38, samples=4 00:24:27.053 lat (msec) : 4=0.61%, 10=77.67%, 20=21.72% 00:24:27.054 cpu : usr=85.03%, sys=13.42%, ctx=16, majf=0, minf=28 00:24:27.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.054 issued rwts: total=19021,9819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.054 00:24:27.054 Run status group 0 (all jobs): 00:24:27.054 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (312MB), run=2005-2005msec 00:24:27.054 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=153MiB (161MB), run=1770-1770msec 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # sync 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # set +e 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # for i in {1..20} 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:24:27.054 rmmod nvme_tcp 00:24:27.054 rmmod nvme_fabrics 00:24:27.054 rmmod nvme_keyring 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # set -e 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@130 -- # return 0 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 150650 ']' 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 150650 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 150650 ']' 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 150650 
00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.054 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150650 00:24:27.313 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.313 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.314 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150650' 00:24:27.314 killing process with pid 150650 00:24:27.314 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 150650 00:24:27.314 11:59:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 150650 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # iptr 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # remove_spdk_ns 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.314 11:59:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:24:29.856 00:24:29.856 real 0m17.780s 00:24:29.856 user 1m13.293s 00:24:29.856 sys 0m7.463s 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.856 ************************************ 00:24:29.856 END TEST nvmf_fio_host 00:24:29.856 ************************************ 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.856 11:59:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.856 ************************************ 00:24:29.857 START TEST nvmf_failover 00:24:29.857 ************************************ 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:24:29.857 * Looking for test storage... 00:24:29.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.857 --rc genhtml_branch_coverage=1 00:24:29.857 --rc genhtml_function_coverage=1 00:24:29.857 --rc genhtml_legend=1 00:24:29.857 --rc geninfo_all_blocks=1 00:24:29.857 --rc geninfo_unexecuted_blocks=1 00:24:29.857 00:24:29.857 ' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.857 --rc genhtml_branch_coverage=1 00:24:29.857 --rc genhtml_function_coverage=1 00:24:29.857 --rc genhtml_legend=1 00:24:29.857 --rc geninfo_all_blocks=1 00:24:29.857 --rc geninfo_unexecuted_blocks=1 00:24:29.857 00:24:29.857 ' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.857 --rc genhtml_branch_coverage=1 00:24:29.857 --rc genhtml_function_coverage=1 00:24:29.857 --rc genhtml_legend=1 00:24:29.857 --rc geninfo_all_blocks=1 00:24:29.857 --rc geninfo_unexecuted_blocks=1 00:24:29.857 00:24:29.857 ' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:29.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.857 --rc genhtml_branch_coverage=1 00:24:29.857 --rc genhtml_function_coverage=1 00:24:29.857 --rc genhtml_legend=1 00:24:29.857 --rc geninfo_all_blocks=1 00:24:29.857 --rc geninfo_unexecuted_blocks=1 00:24:29.857 00:24:29.857 ' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # : 0 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:24:29.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@56 -- # have_pci_nics=0 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:29.857 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # xtrace_disable 00:24:29.858 11:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_devs=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_devs 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_net_devs=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # pci_drivers=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # local -A pci_drivers 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # net_devs=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga net_devs 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # e810=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga e810 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # x722=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga x722 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # mlx=() 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # local -ga mlx 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@333 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:37.997 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:37.997 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci 
in "${pci_devs[@]}" 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:37.997 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:37.997 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:37.998 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@257 -- # (( 2 > 1 ))
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP=
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP=
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:24:37.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:37.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms
00:24:37.998
00:24:37.998 --- 10.0.0.2 ping statistics ---
00:24:37.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.998 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:37.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:37.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:24:37.998
00:24:37.998 --- 10.0.0.1 ping statistics ---
00:24:37.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.998 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=156671
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 156671
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 156671 ']'
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:37.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:37.998 11:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
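The namespace plumbing the test just performed can be reproduced by hand. A condensed sketch of the same topology, with interface names, IPs, and port taken from this run (root privileges assumed; this is a sketch of what nvmf_tcp_init does, not the full helper):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                            # target gets its own namespace
ip link set cvl_0_0 netns "$NS"                               # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                            # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target -> initiator sanity check

Putting the target's NIC port in a namespace is what lets one machine act as both initiator and target over real e810 hardware rather than loopback.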
00:24:37.998 [2024-12-09 11:59:44.905115] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:24:37.998 [2024-12-09 11:59:44.905183] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:37.998 [2024-12-09 11:59:45.002094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:37.998 [2024-12-09 11:59:45.052967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:37.998 [2024-12-09 11:59:45.053019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:37.998 [2024-12-09 11:59:45.053028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:37.998 [2024-12-09 11:59:45.053035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:37.998 [2024-12-09 11:59:45.053041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:37.998 [2024-12-09 11:59:45.054837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:37.998 [2024-12-09 11:59:45.055125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:37.998 [2024-12-09 11:59:45.055126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:37.998 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:38.259 [2024-12-09 11:59:45.911564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:38.259 11:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:38.259 Malloc0
00:24:38.519 11:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
11:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:38.779 11:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:38.779 [2024-12-09 11:59:46.662079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:39.040 11:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:39.040 [2024-12-09 11:59:46.846579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:39.040 11:59:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:39.300 [2024-12-09 11:59:47.023119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
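Stripped of the xtrace noise, the target configuration above is a short RPC sequence. A sketch using the same names and ports; the rpc.py path is collected into a variable for readability, and the flag comments are reasonable readings of the log, not authoritative documentation:

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host to connect
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0                    # expose the bdev as a namespace
for port in 4420 4421 4422; do                               # three listeners on one IP,
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"   # so paths can be yanked later
done

The three listeners on the same address are the whole point of this test: each one is an independent path that can be removed to force a failover.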
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=157037
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 157037 /var/tmp/bdevperf.sock
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 157037 ']'
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:39.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:39.300 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:40.241 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:40.241 11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
11:59:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:40.502 NVMe0n1
00:24:40.502 11:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:40.762
00:24:40.762 11:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=157372
00:24:40.762 11:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:40.762 11:59:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
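Both attach calls above use -x failover with the same bdev name, so the 4421 connection registers as an alternate path for NVMe0n1 rather than a second controller; in failover mode the alternate path carries I/O only when the active one fails. Removing the 4420 listener below is what forces that switch. A sketch of the sequence against the bdevperf RPC socket (paths and NQN from this run):

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
# primary path; this creates the NVMe0n1 bdev
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
# alternate path; same -b name, held in reserve until the active path fails
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover
# on the target side, dropping the active listener triggers the failover
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420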
00:24:41.702 11:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:41.963 [2024-12-09 11:59:49.639374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc27ed0 is same with the state(6) to be set
[... the identical tcp.c:1790 *ERROR* for tqpair=0xc27ed0 repeats roughly 48 more times over the next ~0.3 ms while the qpair is torn down; elided ...]
11:59:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:45.263 11:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:45.263
00:24:45.263 11:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:45.523 11:59:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:48.821 11:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:48.821 [2024-12-09 11:59:56.410071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:48.821 11:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:49.763 11:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:49.763 [2024-12-09 11:59:57.601160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaee140 is same with the state(6) to be set
00:24:49.763 11:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 157372
00:24:56.354 {
00:24:56.354 "results": [
00:24:56.354 {
00:24:56.354 "job": "NVMe0n1",
00:24:56.354 "core_mask": "0x1",
00:24:56.354 "workload": "verify",
00:24:56.354 "status": "finished",
00:24:56.354 "verify_range": {
00:24:56.354 "start": 0,
00:24:56.354 "length": 16384
00:24:56.354 },
00:24:56.354 "queue_depth": 128,
00:24:56.354 "io_size": 4096,
00:24:56.354 "runtime": 15.004125,
00:24:56.354 "iops": 12378.52923779294,
00:24:56.354 "mibps": 48.35362983512867,
00:24:56.354 "io_failed": 8277,
00:24:56.354 "io_timeout": 0,
00:24:56.354 "avg_latency_us": 9878.36229834816,
00:24:56.354 "min_latency_us": 542.72,
00:24:56.354 "max_latency_us": 13817.173333333334
00:24:56.354 }
00:24:56.354 ],
00:24:56.354 "core_count": 1
00:24:56.354 }
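The perform_tests summary above is plain JSON, so the headline numbers can be pulled out mechanically; a sketch assuming the blob was saved to results.json (jq is not part of the test, just a convenient reader):

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, io_failed=\(.io_failed)"' results.json

The io_failed count of 8277 reflects I/O that was in flight while listeners were being pulled; the verify run still finishes because the bdev layer keeps a surviving path throughout.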
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 157037 ']'
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157037'
00:24:56.354 killing process with pid 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 157037
00:24:56.354 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:56.354 [2024-12-09 11:59:47.095829] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:24:56.354 [2024-12-09 11:59:47.095888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157037 ]
00:24:56.354 [2024-12-09 11:59:47.189301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.354 [2024-12-09 11:59:47.224787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:56.354 Running I/O for 15 seconds...
00:24:56.354 11079.00 IOPS, 43.28 MiB/s [2024-12-09T11:00:04.240Z]
00:24:56.354 [2024-12-09 11:59:49.640877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-09 11:59:49.640910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.354 [2024-12-09 11:59:49.640922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-09 11:59:49.640930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.354 [2024-12-09 11:59:49.640939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-09 11:59:49.640948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.354 [2024-12-09 11:59:49.640956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 [2024-12-09 11:59:49.640963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.354 [2024-12-09 11:59:49.640971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b9d0 is same with the state(6) to be set
00:24:56.354 [2024-12-09 11:59:49.641016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-09 11:59:49.641027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 19 more aborted READ commands (lba 94680 through 94824, len:8 each), all completed ABORTED - SQ DELETION; elided ...]
00:24:56.355 [2024-12-09 11:59:49.641371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-09 11:59:49.641378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 81 more aborted WRITE commands (lba 94896 through 95536, len:8 each), all completed ABORTED - SQ DELETION; elided ...]
00:24:56.357 [2024-12-09 11:59:49.642770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-09 11:59:49.642777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 
[2024-12-09 11:59:49.642959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.642992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.643016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.643033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.643050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.643066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.357 [2024-12-09 11:59:49.643085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.357 [2024-12-09 11:59:49.643187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.357 [2024-12-09 11:59:49.643212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.357 [2024-12-09 11:59:49.643219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0 00:24:56.357 [2024-12-09 11:59:49.643227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.357 [2024-12-09 11:59:49.643266] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:56.357 [2024-12-09 11:59:49.643276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:56.357 [2024-12-09 11:59:49.646813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:56.357 [2024-12-09 11:59:49.646839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b9d0 (9): Bad file descriptor 00:24:56.357 [2024-12-09 11:59:49.717104] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
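What the burst above records: when bdev_nvme tears down the TCP qpair on path 10.0.0.2:4420, every queued I/O is completed with ABORTED - SQ DELETION, the path is failed over to 10.0.0.2:4421, and the controller is reset before I/O resumes. A failover pair like this is usually established by attaching the same subsystem over both listeners. The sketch below uses SPDK's rpc.py with the NQN and addresses taken from this log; the bdev name Nvme0 and the -x failover multipath-policy flag are illustrative assumptions, not commands captured in this run.
  # Sketch: attach the same subsystem over two listeners so bdev_nvme can fail over.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Second path to the same NQN; with the failover policy it serves as a passive alternate.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover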
00:24:56.357 9938.00 IOPS, 38.82 MiB/s [2024-12-09T11:00:04.243Z] 10697.67 IOPS, 41.79 MiB/s [2024-12-09T11:00:04.243Z] 11249.25 IOPS, 43.94 MiB/s [2024-12-09T11:00:04.243Z]
00:24:56.358 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted for brevity: queued WRITE commands (sqid:1, lba:55400 through lba:55680, len:8, SGL DATA BLOCK OFFSET) interleaved with READ commands (sqid:1, lba:54664 through lba:55384, len:8, SGL TRANSPORT DATA BLOCK), each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:56.361 [2024-12-09 11:59:53.221947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1437690 is same with the state(6) to be set
00:24:56.361 [2024-12-09 11:59:53.221957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:56.361 [2024-12-09 11:59:53.221961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:56.361 [2024-12-09 11:59:53.221966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0
00:24:56.361 [2024-12-09 11:59:53.221971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.361 [2024-12-09 11:59:53.222005] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:56.361 [2024-12-09 11:59:53.222022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.361 [2024-12-09 11:59:53.222028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.361 [2024-12-09 11:59:53.222035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.361 [2024-12-09 11:59:53.222040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.361 [2024-12-09 11:59:53.222046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.361 [2024-12-09 11:59:53.222050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.361 [2024-12-09 11:59:53.222056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.361 [2024-12-09 11:59:53.222061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.361 [2024-12-09 11:59:53.222067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:56.361 [2024-12-09 11:59:53.224519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:56.361 [2024-12-09 11:59:53.224539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b9d0 (9): Bad file descriptor
00:24:56.361 [2024-12-09 11:59:53.256097] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:56.361 11504.00 IOPS, 44.94 MiB/s [2024-12-09T11:00:04.247Z] 11779.83 IOPS, 46.01 MiB/s [2024-12-09T11:00:04.247Z] 11932.00 IOPS, 46.61 MiB/s [2024-12-09T11:00:04.247Z] 12048.50 IOPS, 47.06 MiB/s [2024-12-09T11:00:04.247Z] 12133.89 IOPS, 47.40 MiB/s [2024-12-09T11:00:04.247Z]
00:24:56.361 [2024-12-09 11:59:57.602640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.361 [... repeated *NOTICE* pairs for queued READ commands (sqid:1, lba:125896 through lba:125928, len:8), each completed with ABORTED - SQ DELETION (00/08), omitted for brevity; the log continues ...]
[... repeated nvme_qpair.c NOTICE pairs elided: queued READ/WRITE commands on sqid:1 (lba 125896-126856) each printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08); the last queued WRITEs (lba 126864-126912) were completed manually by nvme_qpair_abort_queued_reqs with the same status ...]
00:24:56.364 [2024-12-09 11:59:57.604291] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... 4 repeated admin-queue pairs elided: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each aborted with SQ DELETION (00/08) ...]
00:24:56.365 [2024-12-09 11:59:57.604353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:56.365 [2024-12-09 11:59:57.604379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140b9d0 (9): Bad file descriptor
00:24:56.365 [2024-12-09 11:59:57.606806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:56.365 [2024-12-09 11:59:57.669931] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:56.365 12113.80 IOPS, 47.32 MiB/s [2024-12-09T11:00:04.251Z] 12191.00 IOPS, 47.62 MiB/s [2024-12-09T11:00:04.251Z] 12252.00 IOPS, 47.86 MiB/s [2024-12-09T11:00:04.251Z] 12291.69 IOPS, 48.01 MiB/s [2024-12-09T11:00:04.251Z] 12342.50 IOPS, 48.21 MiB/s
00:24:56.365 Latency(us)
00:24:56.365 [2024-12-09T11:00:04.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.365 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:56.365 Verification LBA range: start 0x0 length 0x4000
00:24:56.365 NVMe0n1 : 15.00 12378.53 48.35 551.65 0.00 9878.36 542.72 13817.17
00:24:56.365 [2024-12-09T11:00:04.251Z] ===================================================================================================================
00:24:56.365 [2024-12-09T11:00:04.251Z] Total : 12378.53 48.35 551.65 0.00 9878.36 542.72 13817.17
00:24:56.365 Received shutdown signal, test time was about 15.000000 seconds
00:24:56.365
00:24:56.365 Latency(us)
00:24:56.365 [2024-12-09T11:00:04.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.365 [2024-12-09T11:00:04.251Z] ===================================================================================================================
00:24:56.365 [2024-12-09T11:00:04.251Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=160436
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 160436 /var/tmp/bdevperf.sock
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 160436 ']'
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:56.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:56.365 12:00:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:56.934 12:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:56.934 12:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:56.934 12:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:56.934 [2024-12-09 12:00:04.805993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:57.194 12:00:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:57.194 [2024-12-09 12:00:04.990417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:57.195 12:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:57.454 NVMe0n1
00:24:57.455 12:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:57.714
00:24:57.714 12:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:58.284
00:24:58.284 12:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:58.284 12:00:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:58.285 12:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:58.545 12:00:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:01.842 12:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:01.842 12:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:01.842 12:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=161686
00:25:01.842 12:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:01.842 12:00:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 161686
00:25:02.783 {
00:25:02.783 "results": [
00:25:02.783 {
00:25:02.783 "job": "NVMe0n1",
00:25:02.783 "core_mask": "0x1",
"workload": "verify", 00:25:02.783 "status": "finished", 00:25:02.783 "verify_range": { 00:25:02.783 "start": 0, 00:25:02.783 "length": 16384 00:25:02.783 }, 00:25:02.783 "queue_depth": 128, 00:25:02.783 "io_size": 4096, 00:25:02.783 "runtime": 1.003584, 00:25:02.783 "iops": 12553.010012116574, 00:25:02.783 "mibps": 49.03519535983037, 00:25:02.783 "io_failed": 0, 00:25:02.783 "io_timeout": 0, 00:25:02.783 "avg_latency_us": 10154.00908927343, 00:25:02.783 "min_latency_us": 928.4266666666666, 00:25:02.783 "max_latency_us": 12834.133333333333 00:25:02.783 } 00:25:02.783 ], 00:25:02.783 "core_count": 1 00:25:02.783 } 00:25:02.783 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.783 [2024-12-09 12:00:03.865050] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:25:02.783 [2024-12-09 12:00:03.865107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160436 ] 00:25:02.783 [2024-12-09 12:00:03.947090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.783 [2024-12-09 12:00:03.975374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.783 [2024-12-09 12:00:06.261578] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:02.783 [2024-12-09 12:00:06.261615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.783 [2024-12-09 12:00:06.261624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.783 [2024-12-09 12:00:06.261632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.783 [2024-12-09 12:00:06.261641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.783 [2024-12-09 12:00:06.261647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.783 [2024-12-09 12:00:06.261652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.783 [2024-12-09 12:00:06.261657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.783 [2024-12-09 12:00:06.261662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.783 [2024-12-09 12:00:06.261668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:02.783 [2024-12-09 12:00:06.261689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:02.783 [2024-12-09 12:00:06.261700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22739d0 (9): Bad file descriptor 00:25:02.783 [2024-12-09 12:00:06.275248] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:02.783 Running I/O for 1 seconds... 00:25:02.783 12461.00 IOPS, 48.68 MiB/s 00:25:02.783 Latency(us) 00:25:02.783 [2024-12-09T11:00:10.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.783 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.783 Verification LBA range: start 0x0 length 0x4000 00:25:02.783 NVMe0n1 : 1.00 12553.01 49.04 0.00 0.00 10154.01 928.43 12834.13 00:25:02.783 [2024-12-09T11:00:10.669Z] =================================================================================================================== 00:25:02.783 [2024-12-09T11:00:10.669Z] Total : 12553.01 49.04 0.00 0.00 10154.01 928.43 12834.13 00:25:02.783 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.783 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:03.044 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.337 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.337 12:00:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:03.337 12:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.635 12:00:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 160436 ']' 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160436' 00:25:07.063 killing process with pid 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 160436 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # sync 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # set +e 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # for i in {1..20} 00:25:07.063 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:25:07.063 rmmod nvme_tcp 00:25:07.063 rmmod nvme_fabrics 00:25:07.063 rmmod nvme_keyring 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # set -e 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@130 -- # return 0 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 156671 ']' 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 156671 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 156671 ']' 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 156671 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.325 12:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156671 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156671' 00:25:07.325 killing process with pid 156671 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 156671 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 156671 00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 
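Before the harness finishes tearing the fixture down below, the failover path it just exercised is worth restating in one place. A minimal sketch distilled from the trace above, in the same shell idiom the test uses (the rpc and bdevperf_sock variables are shorthand introduced here; a target with nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance are assumed to be running already):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_sock=/var/tmp/bdevperf.sock
  # publish two extra portals on the existing subsystem
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three paths under one controller name; -x failover keeps the
  # extra trids as standby paths (only the first attach prints NVMe0n1)
  for port in 4420 4421 4422; do
      $rpc -s $bdevperf_sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # drop the active path and confirm the controller survives on a standby trid
  $rpc -s $bdevperf_sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $rpc -s $bdevperf_sock bdev_nvme_get_controllers | grep -q NVMe0
  # kick off the configured workload; this prints the JSON results seen above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $bdevperf_sock perform_tests

The try.txt dump confirms the mechanism: the admin queue aborts with SQ DELETION, bdev_nvme_failover_trid moves the controller from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset completes on the new path with no lost I/O ("io_failed": 0 in the results block).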
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # iptr
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # remove_spdk_ns
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:07.325 12:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:25:09.873 
00:25:09.873 real 0m40.014s
00:25:09.873 user 2m3.136s
00:25:09.873 sys 0m8.673s
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:09.873 ************************************
00:25:09.873 END TEST nvmf_failover
00:25:09.873 ************************************
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.873 ************************************
00:25:09.873 START TEST nvmf_host_discovery
00:25:09.873 ************************************
00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:09.873 * Looking for test storage...
00:25:09.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.873 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:09.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.874 --rc genhtml_branch_coverage=1 00:25:09.874 --rc genhtml_function_coverage=1 00:25:09.874 --rc genhtml_legend=1 00:25:09.874 --rc geninfo_all_blocks=1 00:25:09.874 --rc geninfo_unexecuted_blocks=1 00:25:09.874 00:25:09.874 ' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:09.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.874 --rc genhtml_branch_coverage=1 00:25:09.874 --rc genhtml_function_coverage=1 00:25:09.874 --rc genhtml_legend=1 00:25:09.874 --rc geninfo_all_blocks=1 00:25:09.874 --rc geninfo_unexecuted_blocks=1 00:25:09.874 00:25:09.874 ' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:09.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.874 --rc genhtml_branch_coverage=1 00:25:09.874 --rc genhtml_function_coverage=1 00:25:09.874 --rc genhtml_legend=1 00:25:09.874 --rc geninfo_all_blocks=1 00:25:09.874 --rc geninfo_unexecuted_blocks=1 00:25:09.874 00:25:09.874 ' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:09.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.874 --rc genhtml_branch_coverage=1 00:25:09.874 --rc genhtml_function_coverage=1 00:25:09.874 --rc genhtml_legend=1 00:25:09.874 --rc geninfo_all_blocks=1 00:25:09.874 --rc geninfo_unexecuted_blocks=1 00:25:09.874 00:25:09.874 ' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:09.874 12:00:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # : 0 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:25:09.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@56 -- # have_pci_nics=0 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # xtrace_disable 00:25:09.874 12:00:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_devs=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_devs 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_net_devs=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # pci_drivers=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # local -A pci_drivers 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # net_devs=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga net_devs 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # e810=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga e810 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # x722=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga x722 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@323 -- # mlx=() 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@323 -- # local -ga mlx 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:18.022 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:18.022 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:25:18.022 12:00:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:18.022 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:18.022 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.022 
12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@257 -- # (( 2 > 1 ))
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP=
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP=
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:25:18.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:18.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms
00:25:18.022 
00:25:18.022 --- 10.0.0.2 ping statistics ---
00:25:18.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.022 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms
00:25:18.022 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:18.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:18.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:25:18.023 
00:25:18.023 --- 10.0.0.1 ping statistics ---
00:25:18.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.023 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=167126
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 167126
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 167126 ']'
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:18.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:18.023 12:00:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:18.023 [2024-12-09 12:00:24.868314] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
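The network plumbing nvmftestinit just built is worth keeping in view for the rest of the test; condensing the ip commands traced above (interface names are the cvl_0_* net devices enumerated from the e810 NICs earlier):

  # the target NIC is isolated in its own namespace; the initiator NIC stays
  # in the root namespace, giving a real TCP hop between 10.0.0.1 and 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  # from here on, every target-side process is wrapped in
  # "ip netns exec cvl_0_0_ns_spdk", e.g. the nvmf_tgt launch just above

The two single-packet pings (0% loss in each direction) appear to be the gate for nvmf_tcp_init returning 0 before the target application is started.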
00:25:18.023 [2024-12-09 12:00:24.868382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.023 [2024-12-09 12:00:24.966487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.023 [2024-12-09 12:00:25.016727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.023 [2024-12-09 12:00:25.016783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.023 [2024-12-09 12:00:25.016791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.023 [2024-12-09 12:00:25.016798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.023 [2024-12-09 12:00:25.016805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.023 [2024-12-09 12:00:25.017556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 [2024-12-09 12:00:25.729411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 [2024-12-09 12:00:25.737737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 null0 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 null1 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=167337 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 167337 /tmp/host.sock 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 167337 ']' 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:18.023 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.023 12:00:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.023 [2024-12-09 12:00:25.824815] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
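Two SPDK applications are in play from this point: the nvmf target started above inside the namespace (RPCs on the default /var/tmp/spdk.sock) and this second nvmf_tgt instance acting as the NVMe-oF host, with its RPC plane on /tmp/host.sock. A condensed sketch of the RPC split the following trace walks through (rpc_cmd is the harness wrapper around scripts/rpc.py; the ordering mirrors the trace, where discovery is started before the data subsystem exists so it must be picked up at runtime):

  # host side (-s /tmp/host.sock): follow the discovery service on 10.0.0.2:8009
  # and auto-attach whatever it reports, under the controller name prefix "nvme"
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side (default socket): only now create and expose the data subsystem,
  # backed by the null0 bdev created earlier
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

The waitforcondition loops further down poll get_subsystem_names and get_bdev_list over /tmp/host.sock until controller nvme0 and bdev nvme0n1 appear, which is the observable effect of the discovery attach sequence logged at 12:00:27.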
00:25:18.023 [2024-12-09 12:00:25.824883] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167337 ] 00:25:18.285 [2024-12-09 12:00:25.915598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.285 [2024-12-09 12:00:25.968470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:18.858 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.859 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 [2024-12-09 12:00:26.980894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.121 12:00:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.121 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:19.383 12:00:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:19.383 12:00:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:19.956 [2024-12-09 12:00:27.698862] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.956 [2024-12-09 12:00:27.698894] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.956 [2024-12-09 12:00:27.698909] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.956 
[2024-12-09 12:00:27.787174] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:20.218 [2024-12-09 12:00:27.846115] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:20.218 [2024-12-09 12:00:27.847493] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e45320:1 started. 00:25:20.218 [2024-12-09 12:00:27.849469] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:20.218 [2024-12-09 12:00:27.849499] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:20.218 [2024-12-09 12:00:27.857221] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e45320 was disconnected and freed. delete nvme_qpair. 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.480 12:00:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.480 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:20.481 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:20.481 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:20.481 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.481 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.742 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.742 [2024-12-09 12:00:28.618892] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e456a0:1 started. 00:25:21.005 [2024-12-09 12:00:28.628548] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e456a0 was disconnected and freed. delete nvme_qpair. 
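The eval / (( max-- )) / sleep 1 frames that recur throughout this trace all come from one retry helper, waitforcondition. A minimal sketch reconstructed from the autotest_common.sh@918-924 frames above; the failure path after the retries run out is an assumption, since this trace only ever shows the success path:

    waitforcondition() {
        local cond=$1   # a bash expression, re-evaluated via eval on each pass
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1   # assumed: give up after ~10 tries so the test fails fast
    }

    # As used at host/discovery.sh@113 above:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'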
00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.005 [2024-12-09 12:00:28.705817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.005 [2024-12-09 12:00:28.706807] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:21.005 [2024-12-09 12:00:28.706837] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:21.005 12:00:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.005 [2024-12-09 12:00:28.836652] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:21.005 12:00:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:21.266 [2024-12-09 12:00:29.107352] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:21.266 [2024-12-09 12:00:29.107396] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:21.266 [2024-12-09 12:00:29.107406] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:21.266 [2024-12-09 12:00:29.107412] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 [2024-12-09 12:00:29.981341] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:22.209 [2024-12-09 12:00:29.981359] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.209 [2024-12-09 12:00:29.983718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.209 [2024-12-09 12:00:29.983732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.209 [2024-12-09 12:00:29.983739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.209 [2024-12-09 12:00:29.983744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.209 [2024-12-09 12:00:29.983750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.209 [2024-12-09 12:00:29.983755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.209 [2024-12-09 12:00:29.983761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.209 [2024-12-09 12:00:29.983766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.209 [2024-12-09 12:00:29.983773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.209 12:00:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.209 [2024-12-09 12:00:29.993733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.209 [2024-12-09 12:00:30.003767] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.209 [2024-12-09 12:00:30.003778] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.209 [2024-12-09 12:00:30.003783] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.209 [2024-12-09 12:00:30.003787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.209 [2024-12-09 12:00:30.003801] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.209 [2024-12-09 12:00:30.004119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.209 [2024-12-09 12:00:30.004129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.209 [2024-12-09 12:00:30.004135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.004144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.004152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.004157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.004163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.210 [2024-12-09 12:00:30.004168] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.210 [2024-12-09 12:00:30.004172] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.210 [2024-12-09 12:00:30.004175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
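The get_* accessors being polled are thin jq pipelines over the host app's RPC socket, matching the @55/@59/@63 frames in this trace. In this sketch, rpc_cmd is assumed to be the suite's wrapper around scripts/rpc.py; xargs collapses jq's newline-separated output into one space-joined string so comparisons like [[ ... == "nvme0n1 nvme0n2" ]] work, and an empty result compares cleanly against '':

    get_subsystem_names() {   # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {         # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {   # host/discovery.sh@63; prints trsvcids, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }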
00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.210 [2024-12-09 12:00:30.013829] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.210 [2024-12-09 12:00:30.013837] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.210 [2024-12-09 12:00:30.013841] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.013844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.210 [2024-12-09 12:00:30.013854] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.014140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.210 [2024-12-09 12:00:30.014148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.210 [2024-12-09 12:00:30.014154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.014161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.014172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.014177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.014182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.210 [2024-12-09 12:00:30.014186] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.210 [2024-12-09 12:00:30.014190] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.210 [2024-12-09 12:00:30.014193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:22.210 [2024-12-09 12:00:30.023883] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.210 [2024-12-09 12:00:30.023891] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.210 [2024-12-09 12:00:30.023894] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.023898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.210 [2024-12-09 12:00:30.023908] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:22.210 [2024-12-09 12:00:30.024190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.210 [2024-12-09 12:00:30.024199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.210 [2024-12-09 12:00:30.024204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.024212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.024219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.024224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.024229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.210 [2024-12-09 12:00:30.024234] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.210 [2024-12-09 12:00:30.024237] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.210 [2024-12-09 12:00:30.024240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:22.210 [2024-12-09 12:00:30.033937] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.210 [2024-12-09 12:00:30.033947] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.210 [2024-12-09 12:00:30.033950] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.033953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.210 [2024-12-09 12:00:30.033964] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.034135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.210 [2024-12-09 12:00:30.034145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.210 [2024-12-09 12:00:30.034150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.034165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.034172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.034177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.034183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.210 [2024-12-09 12:00:30.034188] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
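The "connect() failed, errno = 111" (connection refused) records surrounding this point are expected: host/discovery.sh@127 just removed the 4420 listener, so bdev_nvme's reconnect poller keeps dialing a dead port until the discovery service prunes the stale path. A sketch of the step that triggers this, with the port value as observed in this run:

    # Target side: drop the first listener; 4421 stays up.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Host side: wait until only the surviving path remains (@131 below).
    NVMF_SECOND_PORT=4421   # value observed in this trace
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'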
00:25:22.210 [2024-12-09 12:00:30.034191] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.210 [2024-12-09 12:00:30.034194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.210 [2024-12-09 12:00:30.043993] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.210 [2024-12-09 12:00:30.044002] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.210 [2024-12-09 12:00:30.044006] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.044009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.210 [2024-12-09 12:00:30.044019] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:22.210 [2024-12-09 12:00:30.044296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.210 [2024-12-09 12:00:30.044305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.210 [2024-12-09 12:00:30.044310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.044318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.044325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.044330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.044335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.210 [2024-12-09 12:00:30.044339] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.210 [2024-12-09 12:00:30.044342] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:22.210 [2024-12-09 12:00:30.044346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.210 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 [2024-12-09 12:00:30.054048] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.210 [2024-12-09 12:00:30.054058] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.210 [2024-12-09 12:00:30.054062] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.054065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.210 [2024-12-09 12:00:30.054075] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.210 [2024-12-09 12:00:30.054280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.210 [2024-12-09 12:00:30.054290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.210 [2024-12-09 12:00:30.054296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.210 [2024-12-09 12:00:30.054304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.210 [2024-12-09 12:00:30.054313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.210 [2024-12-09 12:00:30.054318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.210 [2024-12-09 12:00:30.054324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.211 [2024-12-09 12:00:30.054328] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.211 [2024-12-09 12:00:30.054332] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.211 [2024-12-09 12:00:30.054335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:22.211 [2024-12-09 12:00:30.064105] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.211 [2024-12-09 12:00:30.064113] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:22.211 [2024-12-09 12:00:30.064117] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.211 [2024-12-09 12:00:30.064120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.211 [2024-12-09 12:00:30.064130] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:22.211 [2024-12-09 12:00:30.064308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.211 [2024-12-09 12:00:30.064317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e17470 with addr=10.0.0.2, port=4420 00:25:22.211 [2024-12-09 12:00:30.064322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e17470 is same with the state(6) to be set 00:25:22.211 [2024-12-09 12:00:30.064330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17470 (9): Bad file descriptor 00:25:22.211 [2024-12-09 12:00:30.064337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:22.211 [2024-12-09 12:00:30.064341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:22.211 [2024-12-09 12:00:30.064349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:22.211 [2024-12-09 12:00:30.064353] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:22.211 [2024-12-09 12:00:30.064357] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:22.211 [2024-12-09 12:00:30.064360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
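The notification checks (@74/@75) page through the host app's event queue. A sketch inferred from the trace: notify_get_notifications -i <id> returns every event newer than <id> (here, bdev_register events for nvme0n1/nvme0n2), and the helper advances notify_id by the count it just consumed, which matches the 0 -> 1 -> 2 -> 4 progression of notify_id in this log:

    notify_id=0

    get_notification_count() {   # host/discovery.sh@74-75, reconstructed
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # is_notification_count_eq (@79-80) then polls until the count settles:
    # waitforcondition 'get_notification_count && ((notification_count == expected_count))'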
00:25:22.211 [2024-12-09 12:00:30.069596] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:22.211 [2024-12-09 12:00:30.069609] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:22.211 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.472 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.472 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.472 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.472 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.473 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.734 12:00:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.675 [2024-12-09 12:00:31.387673] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.675 [2024-12-09 12:00:31.387687] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.675 [2024-12-09 12:00:31.387696] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.675 [2024-12-09 12:00:31.517071] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:23.937 [2024-12-09 12:00:31.785340] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:23.937 [2024-12-09 12:00:31.785989] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1e4b3f0:1 started. 
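The `waitforcondition` polling loop traced above (the `autotest_common.sh` lines tagged @918-@922) retries an arbitrary bash condition until it holds or a fixed retry budget runs out. A minimal sketch of that pattern, reconstructed from the traced locals (`cond`, `max=10`, the `(( max-- ))` loop, and the `eval`); the sleep between attempts is an assumption, not visible in this trace:

```bash
# Reconstructed sketch of the polling helper seen in the trace above;
# not the verbatim SPDK source. The 1-second sleep per retry is assumed.
waitforcondition() {
	local cond=$1   # a bash expression, e.g. '[[ "$(get_bdev_list)" == "" ]]'
	local max=10    # retry budget, as traced
	while (( max-- )); do
		eval "$cond" && return 0   # condition satisfied
		sleep 1                    # assumed pause between polls
	done
	return 1                           # condition never held
}
```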
00:25:23.937 [2024-12-09 12:00:31.787390] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:23.937 [2024-12-09 12:00:31.787411] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.937 [2024-12-09 12:00:31.796498] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1e4b3f0 was disconnected and freed. delete nvme_qpair. 
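The `NOT rpc_cmd ... bdev_nvme_start_discovery ...` wrapper traced here asserts that re-issuing the discovery RPC under the same controller name fails, which the target confirms with the -17 "File exists" response shown next. A sketch of the inversion helper, condensed from the traced steps (`local es=0`, run the command, then the final `(( !es == 0 ))`); the `es > 128` signal branch and expected-output check visible in the trace are simplified away:

```bash
# Condensed sketch of the traced NOT helper: run a command that is
# expected to fail and invert its exit status. The signal (es > 128)
# and expected-output branches from the trace are omitted here.
NOT() {
	local es=0
	"$@" || es=$?
	(( es != 0 ))   # succeed only when the wrapped command failed
}
```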
00:25:23.937 request: 00:25:23.937 { 00:25:23.937 "name": "nvme", 00:25:23.937 "trtype": "tcp", 00:25:23.937 "traddr": "10.0.0.2", 00:25:23.937 "adrfam": "ipv4", 00:25:23.937 "trsvcid": "8009", 00:25:23.937 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:23.937 "wait_for_attach": true, 00:25:23.937 "method": "bdev_nvme_start_discovery", 00:25:23.937 "req_id": 1 00:25:23.937 } 00:25:23.937 Got JSON-RPC error response 00:25:23.937 response: 00:25:23.937 { 00:25:23.937 "code": -17, 00:25:23.937 "message": "File exists" 00:25:23.937 } 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:23.937 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:23.938 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:23.938 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.938 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:23.938 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.938 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.199 request: 00:25:24.199 { 00:25:24.199 "name": "nvme_second", 00:25:24.199 "trtype": "tcp", 00:25:24.199 "traddr": "10.0.0.2", 00:25:24.199 "adrfam": "ipv4", 00:25:24.199 "trsvcid": "8009", 00:25:24.199 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:24.199 "wait_for_attach": true, 00:25:24.199 "method": "bdev_nvme_start_discovery", 00:25:24.199 "req_id": 1 00:25:24.199 } 00:25:24.199 Got JSON-RPC error response 00:25:24.199 response: 00:25:24.199 { 00:25:24.199 "code": -17, 00:25:24.199 "message": "File exists" 00:25:24.199 } 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.199 12:00:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.199 12:00:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.199 12:00:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.583 [2024-12-09 12:00:33.039096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.583 [2024-12-09 12:00:33.039120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2f6e0 with addr=10.0.0.2, port=8010 00:25:25.583 [2024-12-09 12:00:33.039131] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:25.583 [2024-12-09 12:00:33.039136] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:25.583 [2024-12-09 12:00:33.039141] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:26.523 [2024-12-09 12:00:34.041533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.523 [2024-12-09 12:00:34.041553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2f6e0 with addr=10.0.0.2, port=8010 00:25:26.523 [2024-12-09 12:00:34.041561] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.523 [2024-12-09 12:00:34.041566] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.523 [2024-12-09 12:00:34.041571] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:27.463 [2024-12-09 12:00:35.043554] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:27.463 request: 00:25:27.463 { 00:25:27.463 "name": "nvme_second", 00:25:27.463 "trtype": "tcp", 00:25:27.463 "traddr": "10.0.0.2", 00:25:27.463 "adrfam": "ipv4", 00:25:27.463 "trsvcid": "8010", 00:25:27.463 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:27.463 "wait_for_attach": false, 00:25:27.463 "attach_timeout_ms": 3000, 00:25:27.463 "method": "bdev_nvme_start_discovery", 00:25:27.463 "req_id": 1 00:25:27.463 } 00:25:27.463 Got JSON-RPC error response 00:25:27.463 response: 00:25:27.463 { 00:25:27.463 "code": -110, 00:25:27.463 "message": "Connection timed out" 00:25:27.463 } 00:25:27.463 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:27.463 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 167337 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # sync 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # set +e 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # for i in {1..20} 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:25:27.464 rmmod nvme_tcp 00:25:27.464 rmmod nvme_fabrics 00:25:27.464 rmmod nvme_keyring 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # set -e 00:25:27.464 12:00:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@130 -- # return 0 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 167126 ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 167126 ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167126' 00:25:27.464 killing process with pid 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 167126 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # iptr 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:25:27.464 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.724 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # remove_spdk_ns 00:25:27.724 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.724 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.724 12:00:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:25:29.636 00:25:29.636 real 0m20.106s 00:25:29.636 user 0m23.507s 00:25:29.636 sys 0m7.114s 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.636 ************************************ 00:25:29.636 END TEST nvmf_host_discovery 00:25:29.636 ************************************ 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.636 ************************************ 00:25:29.636 START TEST nvmf_host_multipath_status 00:25:29.636 ************************************ 00:25:29.636 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:29.898 * Looking for test storage... 00:25:29.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.898 --rc geninfo_unexecuted_blocks=1 00:25:29.898 00:25:29.898 ' 00:25:29.898 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.898 --rc genhtml_branch_coverage=1 00:25:29.898 --rc genhtml_function_coverage=1 00:25:29.898 --rc genhtml_legend=1 00:25:29.898 --rc geninfo_all_blocks=1 00:25:29.899 --rc geninfo_unexecuted_blocks=1 00:25:29.899 00:25:29.899 ' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:29.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.899 --rc genhtml_branch_coverage=1 00:25:29.899 --rc genhtml_function_coverage=1 00:25:29.899 --rc genhtml_legend=1 00:25:29.899 --rc geninfo_all_blocks=1 00:25:29.899 --rc geninfo_unexecuted_blocks=1 00:25:29.899 00:25:29.899 ' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
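The `lt 1.15 2` check traced above decides whether the installed lcov predates 2.x before exporting the matching LCOV_OPTS. A sketch of the underlying field-by-field comparison, reconstructed from the traced `IFS=.-:` splits and per-field `decimal` compares; handling of non-numeric version fields is simplified:

```bash
# Reconstructed sketch of the traced lt/cmp_versions logic; assumes all
# version fields are plain integers (the real script validates each one).
lt() {
	local -a ver1 ver2
	IFS='.-:' read -ra ver1 <<< "$1"
	IFS='.-:' read -ra ver2 <<< "$2"
	local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
	for (( v = 0; v < max; v++ )); do
		(( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
		(( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
	done
	return 1   # equal versions are not "less than"
}
# e.g. lt 1.15 2 succeeds, so the lcov<2 option set is exported
```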
00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # : 0 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@34 -- # '[' '' 
-eq 1 ']' 00:25:29.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@56 -- # have_pci_nics=0 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # xtrace_disable 00:25:29.899 12:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.045 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.045 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_devs=() 00:25:38.045 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_devs 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_net_devs=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:25:38.046 12:00:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # pci_drivers=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # local -A pci_drivers 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # net_devs=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga net_devs 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # e810=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga e810 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # x722=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga x722 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # mlx=() 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # local -ga mlx 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:25:38.046 
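The enumeration that follows walks each detected e810 PCI function (0x8086:0x159b here) and resolves it to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. A condensed sketch of that mapping, with the sysfs path exactly as traced; the per-device ID filtering and the up/down check are omitted:

```bash
# Condensed sketch of the traced NIC enumeration: map each PCI address to
# its net interface name via sysfs. ID filtering and link checks omitted.
net_devs=()
for pci in "${pci_devs[@]}"; do
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
	pci_net_devs=("${pci_net_devs[@]##*/}")            # keep leaf names only
	net_devs+=("${pci_net_devs[@]}")
done
```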
12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:38.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:38.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:38.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:38.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.046 12:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:25:38.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:25:38.046 00:25:38.046 --- 10.0.0.2 ping statistics --- 00:25:38.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.046 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:25:38.046 00:25:38.046 --- 10.0.0.1 ping statistics --- 00:25:38.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.046 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.046 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=173513 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 173513 
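The `ip`/`iptables` calls traced above build the physical loop-back topology for the TCP tests: the target-side port is isolated in its own network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) exchange traffic over real NIC ports, then reachability is verified in both directions with ping. The equivalent command sequence, lifted directly from the trace:

```bash
# Test topology as traced: the target interface lives in its own netns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
```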
00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 173513 ']' 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.047 12:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.047 [2024-12-09 12:00:45.297687] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:25:38.047 [2024-12-09 12:00:45.297753] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.047 [2024-12-09 12:00:45.395631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:38.047 [2024-12-09 12:00:45.446931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.047 [2024-12-09 12:00:45.446988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.047 [2024-12-09 12:00:45.446997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.047 [2024-12-09 12:00:45.447005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.047 [2024-12-09 12:00:45.447011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
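The target is then launched inside that namespace and the harness blocks until its RPC socket answers. A sketch of the launch as traced (`-i 0 -e 0xFFFF -m 0x3`: shared-memory id 0, all tracepoint groups, two reactors); the backgrounding and pid capture are assumptions inferred from the traced `nvmfpid=173513` / `waitforlisten 173513` pair:

```bash
# Sketch of the traced target launch; the '&' and $! capture are assumed.
ip netns exec cvl_0_0_ns_spdk \
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
	-i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until RPCs are served
```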
00:25:38.047 [2024-12-09 12:00:45.448711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.047 [2024-12-09 12:00:45.448756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=173513 00:25:38.309 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:38.571 [2024-12-09 12:00:46.325361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.571 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:38.832 Malloc0 00:25:38.832 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:39.094 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:39.094 12:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.355 [2024-12-09 12:00:47.104989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.355 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:39.616 [2024-12-09 12:00:47.289465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=173877 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 173877 
/var/tmp/bdevperf.sock 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 173877 ']' 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:39.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.616 12:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.559 12:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.559 12:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:40.559 12:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:40.559 12:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:40.822 Nvme0n1 00:25:40.822 12:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:41.393 Nvme0n1 00:25:41.393 12:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:41.393 12:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:43.306 12:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:43.306 12:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:43.567 12:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.567 12:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.953 12:00:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.953 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.214 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.214 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.214 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.214 12:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.475 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.475 12:00:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.736 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.736 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:45.736 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.997 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:45.997 12:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:47.382 12:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:47.382 12:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.382 12:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.382 12:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.382 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.643 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.643 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.643 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.643 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.903 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.903 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.903 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.903 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:48.163 12:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.424 12:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:48.685 12:00:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:49.627 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:49.627 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.627 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.627 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.888 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.148 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.148 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.148 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.148 12:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.410 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.671 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.671 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:50.671 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:25:50.932 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:50.932 12:00:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.317 12:00:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.317 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.317 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.317 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.317 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.578 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.578 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.578 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.578 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.839 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.100 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.100 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:53.100 12:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:53.359 12:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.618 12:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:54.561 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:54.561 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:54.561 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.561 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.822 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.082 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.082 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.082 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.082 12:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.343 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.604 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.604 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:55.604 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:55.865 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.865 12:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:57.249 12:01:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.249 12:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.249 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.249 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.249 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.249 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.511 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.511 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.511 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.511 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.772 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.773 
12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.033 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.033 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:58.294 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:58.294 12:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:58.555 12:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.555 12:01:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:59.496 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:59.497 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.497 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.497 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.757 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.757 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.757 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.757 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.018 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.018 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.018 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.018 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.279 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.279 12:01:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.279 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.279 12:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.279 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.279 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.279 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.279 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.540 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.540 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.540 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.540 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.800 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.800 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:00.800 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.800 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.061 12:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:02.004 12:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:02.004 12:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:02.004 12:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.004 12:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.264 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.264 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.264 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.264 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.525 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.786 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.786 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.786 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.786 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.047 12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:03.047 
12:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.308 12:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:03.569 12:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:04.511 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:04.511 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.511 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.511 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.772 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.772 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.772 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.772 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.033 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.034 12:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.295 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.295 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.295 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.295 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.555 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:05.556 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:05.817 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:06.077 12:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:07.019 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:07.019 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.019 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.019 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.283 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.283 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.283 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.283 12:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.545 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.806 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.806 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.806 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.806 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 173877 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 173877 ']' 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 173877 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.068 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173877 00:26:08.351 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:26:08.351 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:08.351 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173877' 00:26:08.351 killing process with pid 173877 00:26:08.351 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 173877 00:26:08.351 12:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 173877 00:26:08.351 { 00:26:08.351 "results": [ 00:26:08.351 { 00:26:08.351 "job": "Nvme0n1", 00:26:08.351 "core_mask": "0x4", 00:26:08.351 "workload": "verify", 00:26:08.351 "status": "terminated", 00:26:08.351 "verify_range": { 00:26:08.351 "start": 0, 00:26:08.351 "length": 16384 00:26:08.351 }, 00:26:08.351 "queue_depth": 128, 00:26:08.351 "io_size": 4096, 00:26:08.351 "runtime": 26.789314, 00:26:08.351 "iops": 12120.131183650317, 00:26:08.351 "mibps": 47.34426243613405, 00:26:08.351 "io_failed": 0, 00:26:08.351 "io_timeout": 0, 00:26:08.351 "avg_latency_us": 10542.910657673474, 00:26:08.351 "min_latency_us": 580.2666666666667, 00:26:08.351 "max_latency_us": 3075822.933333333 00:26:08.351 } 00:26:08.351 ], 00:26:08.351 "core_count": 1 00:26:08.351 } 00:26:08.351 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 173877 00:26:08.351 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:08.351 [2024-12-09 12:00:47.365214] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:26:08.351 [2024-12-09 12:00:47.365301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173877 ] 00:26:08.351 [2024-12-09 12:00:47.431238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.351 [2024-12-09 12:00:47.467899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.351 Running I/O for 90 seconds... 
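Every check in the long stretch above is the same two-helper dance from host/multipath_status.sh: set_ANA_state pushes new ANA states to the two listeners over the target's RPC socket, then port_status polls bdevperf's RPC socket with bdev_nvme_get_io_paths and compares one jq-extracted field against the expected value. The helper bodies below are a reconstruction from the xtrace output, not a copy of the script, so details may differ:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
        $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = port, $2 = field (current|connected|accessible), $3 = expected
        status=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

The workload under these probes is the bdevperf verify job launched earlier (-q 128 -o 4096 -w verify -t 90, driven via bdevperf.py perform_tests). Note that "status": "terminated" with "io_failed": 0 in the results block above is the success signature here, not an error: the harness kills bdevperf (multipath_status.sh@137) once all status checks pass, well before the 90-second run would end on its own, hence the 26.79-second runtime.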
00:26:08.351 11034.00 IOPS, 43.10 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 11981.50 IOPS, 46.80 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12226.00 IOPS, 47.76 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12382.25 IOPS, 48.37 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12509.40 IOPS, 48.86 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12579.67 IOPS, 49.14 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12627.71 IOPS, 49.33 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12653.75 IOPS, 49.43 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12693.33 IOPS, 49.58 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12716.50 IOPS, 49.67 MiB/s [2024-12-09T11:01:16.237Z]
00:26:08.351 12738.36 IOPS, 49.76 MiB/s
00:26:08.351 [2024-12-09 12:01:01.048323 .. 12:01:01.063635] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion [condensed: hundreds of near-identical command/completion pairs]
00:26:08.351   WRITE sqid:1 nsid:1 lba 31664..32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.351   READ  sqid:1 nsid:1 lba 31272..31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.351   every completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd cycling 0002..007f and wrapping
00:26:08.356   the dump then repeats the same lba window a second time and is truncated mid-entry
sqid:1 cid:114 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.063732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.063738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.356 [2024-12-09 12:01:01.064186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.356 [2024-12-09 12:01:01.064197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:26:08.357 [2024-12-09 12:01:01.064259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.357 [2024-12-09 12:01:01.064545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:08.357 [2024-12-09 12:01:01.064724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.357 [2024-12-09 12:01:01.064817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.357 [2024-12-09 12:01:01.064828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.064985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.064991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.065006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.065021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.065040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.358 
[2024-12-09 12:01:01.065192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.358 [2024-12-09 12:01:01.065431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.358 [2024-12-09 12:01:01.065442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.358 [2024-12-09 12:01:01.065447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065493] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.065535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.065540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 
12:01:01.066290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32032 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.066586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.066596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.074679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.074716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.074732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.074748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.074753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.075058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.075069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.359 [2024-12-09 12:01:01.075082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.359 [2024-12-09 12:01:01.075087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075129] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.360 [2024-12-09 12:01:01.075214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.360 [2024-12-09 12:01:01.075229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.360 [2024-12-09 12:01:01.075245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.360 [2024-12-09 12:01:01.075261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.360 [2024-12-09 12:01:01.075271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.360 [2024-12-09 12:01:01.075276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e 
p:0 m:0 dnr:0
00:26:08.360 [2024-12-09 12:01:01.075286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.360 [2024-12-09 12:01:01.075292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:08.360 [2024-12-09 12:01:01.075302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.360 [2024-12-09 12:01:01.075307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... several hundred similar *NOTICE* command/completion pairs from 12:01:01.075 to 12:01:01.085 omitted: READ and WRITE commands on qid:1 (lba 31272-32288, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd and cid incrementing ...]
00:26:08.382 [2024-12-09 12:01:01.084988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.382 [2024-12-09 12:01:01.084993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:08.382 [2024-12-09 12:01:01.085003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.382 [2024-12-09 12:01:01.085008] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.382 [2024-12-09 12:01:01.085117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.382 [2024-12-09 12:01:01.085163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.382 [2024-12-09 12:01:01.085255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.382 [2024-12-09 12:01:01.085265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:26:08.383 [2024-12-09 12:01:01.085621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.383 [2024-12-09 12:01:01.085862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.383 [2024-12-09 12:01:01.085879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.383 [2024-12-09 12:01:01.085889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.085987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.085992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:08.384 [2024-12-09 12:01:01.086086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.384 [2024-12-09 12:01:01.086857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.086985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.086996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.384 [2024-12-09 12:01:01.087144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.384 [2024-12-09 12:01:01.087155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:26:08.385 [2024-12-09 12:01:01.087186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.385 [2024-12-09 12:01:01.087326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
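Every completion in the run above carries status (03/02): status code type 0x3 is Path Related Status, and status code 0x02 within that type is Asymmetric Access Inaccessible, the status an NVMe controller returns for I/O sent over a path whose ANA state is inaccessible. The following is a minimal sketch, not part of the captured log, of how an SPDK host application might classify such completions; the helper name is hypothetical, and the sketch assumes only the spdk_nvme_cpl layout and status constants from SPDK's public spdk/nvme_spec.h.

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* Hypothetical helper: true when a completion carries the Path Related
 * Status (sct 0x3) / Asymmetric Access Inaccessible (sc 0x02) pair that
 * spdk_nvme_print_completion renders as "(03/02)" in the log above. */
static bool
cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_PATH &&
	       cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE;
}

An I/O completion callback could use a check like this to requeue the command on another path rather than failing it outright while one path reports itself inaccessible.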
00:26:08.385 [2024-12-09 12:01:01.087342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.385 [2024-12-09 12:01:01.087347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: from 12:01:01.087358 through 12:01:01.088808 the log repeats dozens more pairs of the same shape; every remaining READ and WRITE on sqid:1 nsid:1, lba 31272 through 32096, len:8, completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing from 0001 through 0039]
00:26:08.386 [2024-12-09 12:01:01.088818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.386 [2024-12-09 12:01:01.088823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:08.386 [2024-12-09 12:01:01.088833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.386 [2024-12-09 12:01:01.088838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.386 [2024-12-09 12:01:01.088849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.386 [2024-12-09 12:01:01.088854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.386 [2024-12-09 12:01:01.088864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.386 [2024-12-09 12:01:01.088869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.386 [2024-12-09 12:01:01.088879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.088988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.088994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.387 [2024-12-09 12:01:01.089323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.387 [2024-12-09 12:01:01.089780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.387 [2024-12-09 12:01:01.089842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.387 [2024-12-09 12:01:01.089852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 
dnr:0 00:26:08.388 [2024-12-09 12:01:01.089946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.089993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.089998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.090103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.090108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.093994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.093999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.094014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.094030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.388 [2024-12-09 12:01:01.094046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.388 [2024-12-09 12:01:01.094472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.388 [2024-12-09 12:01:01.094535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.388 [2024-12-09 12:01:01.094545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.389 [2024-12-09 12:01:01.094753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:26:08.389 [2024-12-09 12:01:01.094953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.094985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.094990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.389 [2024-12-09 12:01:01.095176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.389 [2024-12-09 12:01:01.095187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.390 [2024-12-09 12:01:01.095192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.390 [2024-12-09 12:01:01.095202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.390 [2024-12-09 12:01:01.095207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.390 [2024-12-09 12:01:01.095217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.390 [2024-12-09 12:01:01.095223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.390 [2024-12-09 12:01:01.095233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.390 [2024-12-09 12:01:01.095238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.390 [2024-12-09 12:01:01.095248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.390 [2024-12-09 12:01:01.095253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:26:08.390 [2024-12-09 12:01:01.095264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.390 [2024-12-09 12:01:01.095269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
[... several hundred further nvme_qpair.c command/completion *NOTICE* pairs elided: WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba 31664-32288) and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, lba 31272-31656) on sqid:1 nsid:1, all len:8, each reported by 243:nvme_io_qpair_print_command and each completed by 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd cycling 0x003a through 0x007f and wrapping to 0x0000, timestamps 2024-12-09 12:01:01.095264 through 12:01:01.100306, elapsed markers 00:26:08.390 to 00:26:08.395 ...] 
00:26:08.395 [2024-12-09 12:01:01.100306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:26:08.395 [2024-12-09 12:01:01.100463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.395 [2024-12-09 12:01:01.100567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.395 [2024-12-09 12:01:01.100583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.395 [2024-12-09 12:01:01.100593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.100989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.396 [2024-12-09 12:01:01.101169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.101210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.101215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.104964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.104989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.396 [2024-12-09 12:01:01.105422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.396 [2024-12-09 12:01:01.105428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:26:08.397 [2024-12-09 12:01:01.105611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.397 [2024-12-09 12:01:01.105924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.105982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.105994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.397 [2024-12-09 12:01:01.106087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.397 [2024-12-09 12:01:01.106113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.397 [2024-12-09 12:01:01.106119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 
nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.398 [2024-12-09 12:01:01.106435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.398 [2024-12-09 12:01:01.106540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.398 [2024-12-09 12:01:01.106546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:26:08.398 [2024-12-09 12:01:01.106556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.398 [2024-12-09 12:01:01.106561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:08.398 [... the same *NOTICE* command/completion pair repeats for the remaining outstanding qid:1 I/O (WRITE lba:31664-32288, READ lba:31272-31656), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:08.402 12613.75 IOPS, 49.27 MiB/s [2024-12-09T11:01:16.288Z] 11643.46 IOPS, 45.48 MiB/s [2024-12-09T11:01:16.288Z] 10811.79 IOPS, 42.23 MiB/s [2024-12-09T11:01:16.288Z] 10141.47 IOPS, 39.62 MiB/s [2024-12-09T11:01:16.288Z] 10317.81 IOPS, 40.30 MiB/s [2024-12-09T11:01:16.288Z] 10457.65 IOPS, 40.85 MiB/s [2024-12-09T11:01:16.288Z] 10819.89 IOPS, 42.27 MiB/s [2024-12-09T11:01:16.288Z] 11169.68 IOPS, 43.63 MiB/s [2024-12-09T11:01:16.288Z] 11360.20 IOPS, 44.38 MiB/s [2024-12-09T11:01:16.288Z] 11427.81 IOPS, 44.64 MiB/s [2024-12-09T11:01:16.288Z] 11489.59 IOPS, 44.88 MiB/s [2024-12-09T11:01:16.288Z] 11710.74 IOPS, 45.75 MiB/s [2024-12-09T11:01:16.288Z] 11936.17 IOPS, 46.63 MiB/s [2024-12-09T11:01:16.288Z] [2024-12-09 12:01:13.770652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.402 [2024-12-09 12:01:13.770688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:08.402 [2024-12-09 12:01:13.770717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.402 [2024-12-09 12:01:13.770724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:08.402 [2024-12-09 12:01:13.770735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.402 [2024-12-09 12:01:13.770741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:08.402 [2024-12-09 12:01:13.773646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.402 [2024-12-09 12:01:13.773664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:08.402 [2024-12-09 12:01:13.773678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.402 [2024-12-09 12:01:13.773683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:08.402 [2024-12-09 12:01:13.774500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:08.402 [2024-12-09 12:01:13.774510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:08.402 12067.48 IOPS, 47.14 MiB/s [2024-12-09T11:01:16.288Z] 12097.00 IOPS, 47.25 MiB/s [2024-12-09T11:01:16.288Z] Received shutdown signal, test time was about 26.789926 seconds
00:26:08.402
00:26:08.402 Latency(us)
00:26:08.402 [2024-12-09T11:01:16.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:08.402 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:08.402 Verification LBA range: start 0x0 length 0x4000
00:26:08.402 Nvme0n1 : 26.79 12120.13 47.34 0.00 0.00 10542.91 580.27 3075822.93
00:26:08.402 [2024-12-09T11:01:16.288Z] ===================================================================================================================
00:26:08.402 [2024-12-09T11:01:16.288Z] Total : 12120.13 47.34 0.00 0.00 10542.91 580.27 3075822.93
00:26:08.402 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # sync
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # set +e
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # for i in {1..20}
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:26:08.667 rmmod nvme_tcp
00:26:08.667 rmmod nvme_fabrics
00:26:08.667 rmmod nvme_keyring
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # set -e
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@130 -- # return 0
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 173513 ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 173513 ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173513'
00:26:08.667 killing process with pid 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 173513
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # iptr
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # remove_spdk_ns
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:08.667 12:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1
00:26:11.210
00:26:11.210 real 0m41.099s
00:26:11.210 user 1m46.072s
00:26:11.210 sys 0m11.657s
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:11.210 ************************************
00:26:11.210 END TEST nvmf_host_multipath_status
00:26:11.210 ************************************
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:11.210 ************************************
00:26:11.210 START TEST nvmf_discovery_remove_ifc
00:26:11.210 ************************************
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:11.210 * Looking for test storage...
00:26:11.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.210 --rc genhtml_branch_coverage=1 00:26:11.210 --rc genhtml_function_coverage=1 00:26:11.210 --rc genhtml_legend=1 00:26:11.210 --rc geninfo_all_blocks=1 00:26:11.210 --rc geninfo_unexecuted_blocks=1 00:26:11.210 00:26:11.210 ' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.210 --rc genhtml_branch_coverage=1 00:26:11.210 --rc genhtml_function_coverage=1 00:26:11.210 --rc genhtml_legend=1 00:26:11.210 --rc geninfo_all_blocks=1 00:26:11.210 --rc geninfo_unexecuted_blocks=1 00:26:11.210 00:26:11.210 ' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.210 --rc genhtml_branch_coverage=1 00:26:11.210 --rc genhtml_function_coverage=1 00:26:11.210 --rc genhtml_legend=1 00:26:11.210 --rc geninfo_all_blocks=1 00:26:11.210 --rc geninfo_unexecuted_blocks=1 00:26:11.210 00:26:11.210 ' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.210 --rc genhtml_branch_coverage=1 00:26:11.210 --rc genhtml_function_coverage=1 00:26:11.210 --rc genhtml_legend=1 
00:26:11.210 --rc geninfo_all_blocks=1 00:26:11.210 --rc geninfo_unexecuted_blocks=1 00:26:11.210 00:26:11.210 ' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.210 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # : 0 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 
00:26:11.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@56 -- # have_pci_nics=0 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # xtrace_disable 00:26:11.211 12:01:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_devs=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_devs 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_net_devs=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # pci_drivers=() 00:26:19.356 12:01:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # local -A pci_drivers 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # net_devs=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga net_devs 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # e810=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga e810 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # x722=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga x722 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@323 -- # mlx=() 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@323 -- # local -ga mlx 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:19.356 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:19.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:19.356 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:19.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
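A note on the "[: : integer expression expected" message that nvmf/common.sh line 34 emitted during the setup above: bash's [ builtin refuses a numeric comparison against an empty expansion, as the traced test '[' '' -eq 1 ']' shows. A minimal reproduction and the usual fix, with SOME_FLAG as a hypothetical stand-in for whichever variable common.sh actually tests there:

  unset SOME_FLAG                                # hypothetical name; unset, it expands to ''
  [ "$SOME_FLAG" -eq 1 ] && echo flag set        # -> [: : integer expression expected
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo flag set   # defaulted expansion keeps the test well-formed

The harness survives because the failing test simply evaluates false, but the message is noise worth fixing at the source.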
00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:19.357 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.357 12:01:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:26:19.357 12:01:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:26:19.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:26:19.357 00:26:19.357 --- 10.0.0.2 ping statistics --- 00:26:19.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.357 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:26:19.357 00:26:19.357 --- 10.0.0.1 ping statistics --- 00:26:19.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.357 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=183760 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 183760 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
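The nvmf_tcp_init sequence traced above is the core of the phy-mode fixture: the target port cvl_0_0 moves into its own network namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, so a single host exercises a real NIC-to-NIC TCP path. Condensed into a standalone sketch (interface and namespace names as logged in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the comment tag is what lets the teardown seen at the top of
  # this section drop the rule again via iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Prove reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1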
00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 183760 ']' 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.357 12:01:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 [2024-12-09 12:01:26.351551] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:26:19.357 [2024-12-09 12:01:26.351620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.357 [2024-12-09 12:01:26.449412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.357 [2024-12-09 12:01:26.498623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.357 [2024-12-09 12:01:26.498681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.357 [2024-12-09 12:01:26.498690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.357 [2024-12-09 12:01:26.498697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.357 [2024-12-09 12:01:26.498704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
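waitforlisten, traced at the start of this block with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly launched nvmf_tgt answers on its RPC socket. A rough stand-in for the same wait, assuming SPDK's scripts/rpc.py and its rpc_get_methods method (the real helper's retry bookkeeping is more involved):

  pid=183760 rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || break                                 # stop if the app died
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break  # socket is answering
      sleep 0.5
  done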
00:26:19.357 [2024-12-09 12:01:26.499445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.357 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.357 [2024-12-09 12:01:27.222963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.358 [2024-12-09 12:01:27.231180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:19.619 null0 00:26:19.619 [2024-12-09 12:01:27.263160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=184122 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 184122 /tmp/host.sock 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 184122 ']' 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:19.619 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.619 12:01:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.619 [2024-12-09 12:01:27.346675] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:26:19.619 [2024-12-09 12:01:27.346738] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184122 ] 00:26:19.619 [2024-12-09 12:01:27.436112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.619 [2024-12-09 12:01:27.488616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.563 12:01:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.544 [2024-12-09 12:01:29.332163] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:21.544 [2024-12-09 12:01:29.332194] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:21.544 [2024-12-09 12:01:29.332209] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:21.822 [2024-12-09 12:01:29.460607] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:21.822 [2024-12-09 12:01:29.642032] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:21.822 [2024-12-09 12:01:29.643210] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x22feed0:1 started. 
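The host-side bring-up above is three RPCs against /tmp/host.sock; rpc_cmd is effectively the harness wrapper around scripts/rpc.py, so spelled out directly (flags exactly as logged) it amounts to:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

As its name suggests, --wait-for-attach holds the RPC open until the first controller attach completes, which is why nvme0n1 already exists by the time the call returns; the deliberately small loss/reconnect timeouts are what make that bdev disappear within seconds once the interface is pulled.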
00:26:21.822 [2024-12-09 12:01:29.645038] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:21.822 [2024-12-09 12:01:29.645101] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:21.822 [2024-12-09 12:01:29.645130] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:21.822 [2024-12-09 12:01:29.645148] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:21.822 [2024-12-09 12:01:29.645172] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:21.822 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.822 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:21.822 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.823 [2024-12-09 12:01:29.651149] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x22feed0 was disconnected and freed. delete nvme_qpair. 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:21.823 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.108 12:01:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.108 12:01:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.055 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.315 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.315 12:01:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.256 12:01:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.256 12:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.256 12:01:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.200 12:01:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.200 12:01:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.583 12:01:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.524 [2024-12-09 12:01:35.085184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:27.524 [2024-12-09 12:01:35.085224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.524 [2024-12-09 12:01:35.085234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.524 [2024-12-09 12:01:35.085242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.524 [2024-12-09 12:01:35.085248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.524 [2024-12-09 12:01:35.085253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.524 [2024-12-09 12:01:35.085263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.524 [2024-12-09 12:01:35.085268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.524 [2024-12-09 12:01:35.085274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.524 [2024-12-09 12:01:35.085279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.524 [2024-12-09 12:01:35.085284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.524 [2024-12-09 12:01:35.085290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22db6d0 is same with the state(6) to be set 00:26:27.524 [2024-12-09 12:01:35.095205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22db6d0 (9): Bad file descriptor 00:26:27.524 [2024-12-09 12:01:35.105242] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:27.524 [2024-12-09 12:01:35.105250] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:27.524 [2024-12-09 12:01:35.105255] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:27.524 [2024-12-09 12:01:35.105259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:27.524 [2024-12-09 12:01:35.105278] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:27.524 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.524 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.462 [2024-12-09 12:01:36.156722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:28.462 [2024-12-09 12:01:36.156817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22db6d0 with addr=10.0.0.2, port=4420 00:26:28.462 [2024-12-09 12:01:36.156848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22db6d0 is same with the state(6) to be set 00:26:28.462 [2024-12-09 12:01:36.156907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22db6d0 (9): Bad file descriptor 00:26:28.462 [2024-12-09 12:01:36.158028] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:28.462 [2024-12-09 12:01:36.158099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:28.462 [2024-12-09 12:01:36.158122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:28.462 [2024-12-09 12:01:36.158147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:28.462 [2024-12-09 12:01:36.158168] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
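errno 110 in the posix_sock_create connect() failure above is ETIMEDOUT, the same "Connection timed out" already reported by nvme_tcp_read_data: with cvl_0_0 gone from the namespace, the TCP SYNs to 10.0.0.2:4420 go unanswered. Quick confirmation:

  python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'
  # ETIMEDOUT - Connection timed out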
00:26:28.462 [2024-12-09 12:01:36.158184] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:28.462 [2024-12-09 12:01:36.158197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:28.462 [2024-12-09 12:01:36.158231] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:28.462 [2024-12-09 12:01:36.158245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:28.462 12:01:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.462 12:01:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.462 12:01:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.401 [2024-12-09 12:01:37.160668] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:29.401 [2024-12-09 12:01:37.160685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.401 [2024-12-09 12:01:37.160694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.401 [2024-12-09 12:01:37.160699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.401 [2024-12-09 12:01:37.160705] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:29.401 [2024-12-09 12:01:37.160710] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.401 [2024-12-09 12:01:37.160714] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.401 [2024-12-09 12:01:37.160717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
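The @29/@33/@34 lines repeating through this stretch are the harness polling for the bdev list to reach an expected value over the host RPC socket. Reduced to its essentials (same pipeline as logged; rpc_cmd per the harness):

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      until [[ "$(get_bdev_list)" == "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''        # interface pulled: nvme0n1 must drop out once ctrlr-loss-timeout expires
  wait_for_bdev nvme1n1   # interface restored: discovery re-attaches under the next name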
00:26:29.401 [2024-12-09 12:01:37.160736] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:29.401 [2024-12-09 12:01:37.160755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.401 [2024-12-09 12:01:37.160763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.401 [2024-12-09 12:01:37.160771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.401 [2024-12-09 12:01:37.160776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.401 [2024-12-09 12:01:37.160782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.401 [2024-12-09 12:01:37.160787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.401 [2024-12-09 12:01:37.160792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.401 [2024-12-09 12:01:37.160797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.401 [2024-12-09 12:01:37.160803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.401 [2024-12-09 12:01:37.160808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.401 [2024-12-09 12:01:37.160813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:29.401 [2024-12-09 12:01:37.161097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cadf0 (9): Bad file descriptor 00:26:29.401 [2024-12-09 12:01:37.162107] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:29.401 [2024-12-09 12:01:37.162115] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.401 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.661 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.662 12:01:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:30.601 12:01:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.538 [2024-12-09 12:01:39.177090] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.538 [2024-12-09 12:01:39.177106] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.538 [2024-12-09 12:01:39.177116] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.538 [2024-12-09 12:01:39.263353] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:31.798 12:01:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.798 [2024-12-09 12:01:39.488498] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:31.798 [2024-12-09 12:01:39.489292] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2308640:1 started. 
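The rpc_cmd | jq | sort | xargs runs repeating above and below are the test's bdev-list poll: once a second it re-reads the host's bdev list over /tmp/host.sock until the re-discovered namespace shows up as nvme1n1. Condensed, the pattern is roughly the following sketch (rpc_cmd is the harness wrapper around scripts/rpc.py; the loop shape is inferred from the trace, not copied from discovery_remove_ifc.sh):

    # Poll the host's bdev list until the expected bdev appears.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1    # e.g. nvme1n1
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }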
00:26:31.798 [2024-12-09 12:01:39.490200] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:31.798 [2024-12-09 12:01:39.490228] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:31.798 [2024-12-09 12:01:39.490243] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:31.798 [2024-12-09 12:01:39.490254] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:31.798 [2024-12-09 12:01:39.490259] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:31.798 [2024-12-09 12:01:39.536275] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2308640 was disconnected and freed. delete nvme_qpair. 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 184122 00:26:32.738 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 184122 ']' 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 184122 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184122 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184122' 00:26:32.739 killing process with pid 184122 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 184122 00:26:32.739 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 184122 00:26:32.998 12:01:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # sync 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # set +e 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # for i in {1..20} 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:26:32.998 rmmod nvme_tcp 00:26:32.998 rmmod nvme_fabrics 00:26:32.998 rmmod nvme_keyring 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # set -e 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@130 -- # return 0 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 183760 ']' 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 183760 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 183760 ']' 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 183760 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183760 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183760' 00:26:32.998 killing process with pid 183760 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 183760 00:26:32.998 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 183760 00:26:33.258 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # iptr 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # remove_spdk_ns 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.259 12:01:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.166 12:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:26:35.166 00:26:35.166 real 0m24.364s 00:26:35.166 user 0m29.537s 00:26:35.166 sys 0m7.103s 00:26:35.166 12:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.166 12:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.166 ************************************ 00:26:35.166 END TEST nvmf_discovery_remove_ifc 00:26:35.166 ************************************ 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.428 ************************************ 00:26:35.428 START TEST nvmf_identify_kernel_target 00:26:35.428 ************************************ 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.428 * Looking for test storage... 
00:26:35.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.428 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.690 --rc genhtml_branch_coverage=1 00:26:35.690 --rc genhtml_function_coverage=1 00:26:35.690 --rc genhtml_legend=1 00:26:35.690 --rc geninfo_all_blocks=1 00:26:35.690 --rc geninfo_unexecuted_blocks=1 00:26:35.690 00:26:35.690 ' 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.690 --rc genhtml_branch_coverage=1 00:26:35.690 --rc genhtml_function_coverage=1 00:26:35.690 --rc genhtml_legend=1 00:26:35.690 --rc geninfo_all_blocks=1 00:26:35.690 --rc geninfo_unexecuted_blocks=1 00:26:35.690 00:26:35.690 ' 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.690 --rc genhtml_branch_coverage=1 00:26:35.690 --rc genhtml_function_coverage=1 00:26:35.690 --rc genhtml_legend=1 00:26:35.690 --rc geninfo_all_blocks=1 00:26:35.690 --rc geninfo_unexecuted_blocks=1 00:26:35.690 00:26:35.690 ' 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.690 --rc genhtml_branch_coverage=1 00:26:35.690 --rc genhtml_function_coverage=1 00:26:35.690 --rc genhtml_legend=1 00:26:35.690 --rc geninfo_all_blocks=1 00:26:35.690 --rc geninfo_unexecuted_blocks=1 00:26:35.690 00:26:35.690 ' 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.690 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # : 0 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@34 
-- # '[' '' -eq 1 ']' 00:26:35.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@56 -- # have_pci_nics=0 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # xtrace_disable 00:26:35.691 12:01:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_devs=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_devs 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_net_devs=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # pci_drivers=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # local -A pci_drivers 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # net_devs=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga net_devs 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # e810=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga e810 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # x722=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga x722 00:26:43.836 12:01:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # mlx=() 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # local -ga mlx 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:43.836 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:43.836 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:43.836 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:43.836 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:26:43.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:26:43.836 00:26:43.836 --- 10.0.0.2 ping statistics --- 00:26:43.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.836 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:26:43.836 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:26:43.837 00:26:43.837 --- 10.0.0.1 ping statistics --- 00:26:43.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.837 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:26:43.837 12:01:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:43.837 12:01:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:46.390 Waiting for block devices as requested 00:26:46.390 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:46.651 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:46.651 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:46.651 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:46.913 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:46.913 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:46.913 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:47.175 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:47.175 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:47.435 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:47.435 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:47.435 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:47.696 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:47.696 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:47.696 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:47.958 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:47.958 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
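In the device loop below, the GPT probe reports /dev/nvme0n1 unused ("No valid GPT data, bailing"), so it becomes the exported namespace, and configure_kernel_target assembles a Linux kernel nvmet target over configfs. xtrace hides the redirection targets of the echo commands, so this sketch fills in the standard nvmet attribute names; the subsystem NQN, device, and port values are this run's, and the exact nvmf/common.sh body may differ:

    # Stand up a kernel NVMe-oF/TCP target via configfs (assembled from the trace).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The Model Number SPDK-nqn.2016-06.io.spdk:testnqn in the identify output further down is consistent with the attr_model write taking effect.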
00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:48.220 No valid GPT data, bailing 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:26:48.220 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:48.482 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:48.482 00:26:48.482 Discovery Log Number of Records 2, Generation counter 2 00:26:48.482 =====Discovery Log Entry 0====== 00:26:48.482 trtype: tcp 00:26:48.482 adrfam: ipv4 00:26:48.482 subtype: current discovery subsystem 00:26:48.482 treq: not specified, sq flow control disable supported 00:26:48.482 portid: 1 00:26:48.482 trsvcid: 4420 00:26:48.482 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:48.482 traddr: 10.0.0.1 00:26:48.482 eflags: none 00:26:48.482 sectype: none 00:26:48.482 =====Discovery Log Entry 1====== 00:26:48.482 trtype: tcp 00:26:48.482 adrfam: ipv4 00:26:48.482 subtype: nvme subsystem 00:26:48.482 treq: not specified, sq flow control disable 
supported 00:26:48.482 portid: 1 00:26:48.482 trsvcid: 4420 00:26:48.482 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:48.482 traddr: 10.0.0.1 00:26:48.482 eflags: none 00:26:48.482 sectype: none 00:26:48.482 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:48.482 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:48.482 ===================================================== 00:26:48.482 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:48.482 ===================================================== 00:26:48.482 Controller Capabilities/Features 00:26:48.482 ================================ 00:26:48.482 Vendor ID: 0000 00:26:48.482 Subsystem Vendor ID: 0000 00:26:48.482 Serial Number: 533d15b4dc2f2640518d 00:26:48.482 Model Number: Linux 00:26:48.482 Firmware Version: 6.8.9-20 00:26:48.482 Recommended Arb Burst: 0 00:26:48.482 IEEE OUI Identifier: 00 00 00 00:26:48.482 Multi-path I/O 00:26:48.482 May have multiple subsystem ports: No 00:26:48.482 May have multiple controllers: No 00:26:48.482 Associated with SR-IOV VF: No 00:26:48.482 Max Data Transfer Size: Unlimited 00:26:48.482 Max Number of Namespaces: 0 00:26:48.482 Max Number of I/O Queues: 1024 00:26:48.482 NVMe Specification Version (VS): 1.3 00:26:48.482 NVMe Specification Version (Identify): 1.3 00:26:48.482 Maximum Queue Entries: 1024 00:26:48.483 Contiguous Queues Required: No 00:26:48.483 Arbitration Mechanisms Supported 00:26:48.483 Weighted Round Robin: Not Supported 00:26:48.483 Vendor Specific: Not Supported 00:26:48.483 Reset Timeout: 7500 ms 00:26:48.483 Doorbell Stride: 4 bytes 00:26:48.483 NVM Subsystem Reset: Not Supported 00:26:48.483 Command Sets Supported 00:26:48.483 NVM Command Set: Supported 00:26:48.483 Boot Partition: Not Supported 00:26:48.483 Memory Page Size Minimum: 4096 bytes 00:26:48.483 Memory Page Size Maximum: 4096 bytes 00:26:48.483 Persistent Memory Region: Not Supported 00:26:48.483 Optional Asynchronous Events Supported 00:26:48.483 Namespace Attribute Notices: Not Supported 00:26:48.483 Firmware Activation Notices: Not Supported 00:26:48.483 ANA Change Notices: Not Supported 00:26:48.483 PLE Aggregate Log Change Notices: Not Supported 00:26:48.483 LBA Status Info Alert Notices: Not Supported 00:26:48.483 EGE Aggregate Log Change Notices: Not Supported 00:26:48.483 Normal NVM Subsystem Shutdown event: Not Supported 00:26:48.483 Zone Descriptor Change Notices: Not Supported 00:26:48.483 Discovery Log Change Notices: Supported 00:26:48.483 Controller Attributes 00:26:48.483 128-bit Host Identifier: Not Supported 00:26:48.483 Non-Operational Permissive Mode: Not Supported 00:26:48.483 NVM Sets: Not Supported 00:26:48.483 Read Recovery Levels: Not Supported 00:26:48.483 Endurance Groups: Not Supported 00:26:48.483 Predictable Latency Mode: Not Supported 00:26:48.483 Traffic Based Keep ALive: Not Supported 00:26:48.483 Namespace Granularity: Not Supported 00:26:48.483 SQ Associations: Not Supported 00:26:48.483 UUID List: Not Supported 00:26:48.483 Multi-Domain Subsystem: Not Supported 00:26:48.483 Fixed Capacity Management: Not Supported 00:26:48.483 Variable Capacity Management: Not Supported 00:26:48.483 Delete Endurance Group: Not Supported 00:26:48.483 Delete NVM Set: Not Supported 00:26:48.483 Extended LBA Formats Supported: Not Supported 00:26:48.483 Flexible Data Placement 
Supported: Not Supported 00:26:48.483 00:26:48.483 Controller Memory Buffer Support 00:26:48.483 ================================ 00:26:48.483 Supported: No 00:26:48.483 00:26:48.483 Persistent Memory Region Support 00:26:48.483 ================================ 00:26:48.483 Supported: No 00:26:48.483 00:26:48.483 Admin Command Set Attributes 00:26:48.483 ============================ 00:26:48.483 Security Send/Receive: Not Supported 00:26:48.483 Format NVM: Not Supported 00:26:48.483 Firmware Activate/Download: Not Supported 00:26:48.483 Namespace Management: Not Supported 00:26:48.483 Device Self-Test: Not Supported 00:26:48.483 Directives: Not Supported 00:26:48.483 NVMe-MI: Not Supported 00:26:48.483 Virtualization Management: Not Supported 00:26:48.483 Doorbell Buffer Config: Not Supported 00:26:48.483 Get LBA Status Capability: Not Supported 00:26:48.483 Command & Feature Lockdown Capability: Not Supported 00:26:48.483 Abort Command Limit: 1 00:26:48.483 Async Event Request Limit: 1 00:26:48.483 Number of Firmware Slots: N/A 00:26:48.483 Firmware Slot 1 Read-Only: N/A 00:26:48.483 Firmware Activation Without Reset: N/A 00:26:48.483 Multiple Update Detection Support: N/A 00:26:48.483 Firmware Update Granularity: No Information Provided 00:26:48.483 Per-Namespace SMART Log: No 00:26:48.483 Asymmetric Namespace Access Log Page: Not Supported 00:26:48.483 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:48.483 Command Effects Log Page: Not Supported 00:26:48.483 Get Log Page Extended Data: Supported 00:26:48.483 Telemetry Log Pages: Not Supported 00:26:48.483 Persistent Event Log Pages: Not Supported 00:26:48.483 Supported Log Pages Log Page: May Support 00:26:48.483 Commands Supported & Effects Log Page: Not Supported 00:26:48.483 Feature Identifiers & Effects Log Page:May Support 00:26:48.483 NVMe-MI Commands & Effects Log Page: May Support 00:26:48.483 Data Area 4 for Telemetry Log: Not Supported 00:26:48.483 Error Log Page Entries Supported: 1 00:26:48.483 Keep Alive: Not Supported 00:26:48.483 00:26:48.483 NVM Command Set Attributes 00:26:48.483 ========================== 00:26:48.483 Submission Queue Entry Size 00:26:48.483 Max: 1 00:26:48.483 Min: 1 00:26:48.483 Completion Queue Entry Size 00:26:48.483 Max: 1 00:26:48.483 Min: 1 00:26:48.483 Number of Namespaces: 0 00:26:48.483 Compare Command: Not Supported 00:26:48.483 Write Uncorrectable Command: Not Supported 00:26:48.483 Dataset Management Command: Not Supported 00:26:48.483 Write Zeroes Command: Not Supported 00:26:48.483 Set Features Save Field: Not Supported 00:26:48.483 Reservations: Not Supported 00:26:48.483 Timestamp: Not Supported 00:26:48.483 Copy: Not Supported 00:26:48.483 Volatile Write Cache: Not Present 00:26:48.483 Atomic Write Unit (Normal): 1 00:26:48.483 Atomic Write Unit (PFail): 1 00:26:48.483 Atomic Compare & Write Unit: 1 00:26:48.483 Fused Compare & Write: Not Supported 00:26:48.483 Scatter-Gather List 00:26:48.483 SGL Command Set: Supported 00:26:48.483 SGL Keyed: Not Supported 00:26:48.483 SGL Bit Bucket Descriptor: Not Supported 00:26:48.483 SGL Metadata Pointer: Not Supported 00:26:48.483 Oversized SGL: Not Supported 00:26:48.483 SGL Metadata Address: Not Supported 00:26:48.483 SGL Offset: Supported 00:26:48.483 Transport SGL Data Block: Not Supported 00:26:48.483 Replay Protected Memory Block: Not Supported 00:26:48.483 00:26:48.483 Firmware Slot Information 00:26:48.483 ========================= 00:26:48.483 Active slot: 0 00:26:48.483 00:26:48.483 00:26:48.483 Error Log 00:26:48.483 
========= 00:26:48.483 00:26:48.483 Active Namespaces 00:26:48.483 ================= 00:26:48.483 Discovery Log Page 00:26:48.483 ================== 00:26:48.483 Generation Counter: 2 00:26:48.483 Number of Records: 2 00:26:48.483 Record Format: 0 00:26:48.483 00:26:48.483 Discovery Log Entry 0 00:26:48.483 ---------------------- 00:26:48.483 Transport Type: 3 (TCP) 00:26:48.483 Address Family: 1 (IPv4) 00:26:48.483 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:48.483 Entry Flags: 00:26:48.483 Duplicate Returned Information: 0 00:26:48.483 Explicit Persistent Connection Support for Discovery: 0 00:26:48.483 Transport Requirements: 00:26:48.483 Secure Channel: Not Specified 00:26:48.483 Port ID: 1 (0x0001) 00:26:48.483 Controller ID: 65535 (0xffff) 00:26:48.483 Admin Max SQ Size: 32 00:26:48.483 Transport Service Identifier: 4420 00:26:48.483 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:48.483 Transport Address: 10.0.0.1 00:26:48.483 Discovery Log Entry 1 00:26:48.483 ---------------------- 00:26:48.483 Transport Type: 3 (TCP) 00:26:48.483 Address Family: 1 (IPv4) 00:26:48.483 Subsystem Type: 2 (NVM Subsystem) 00:26:48.483 Entry Flags: 00:26:48.483 Duplicate Returned Information: 0 00:26:48.483 Explicit Persistent Connection Support for Discovery: 0 00:26:48.483 Transport Requirements: 00:26:48.483 Secure Channel: Not Specified 00:26:48.483 Port ID: 1 (0x0001) 00:26:48.483 Controller ID: 65535 (0xffff) 00:26:48.483 Admin Max SQ Size: 32 00:26:48.483 Transport Service Identifier: 4420 00:26:48.483 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:48.483 Transport Address: 10.0.0.1 00:26:48.483 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:48.746 get_feature(0x01) failed 00:26:48.746 get_feature(0x02) failed 00:26:48.746 get_feature(0x04) failed 00:26:48.746 ===================================================== 00:26:48.746 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:48.746 ===================================================== 00:26:48.746 Controller Capabilities/Features 00:26:48.746 ================================ 00:26:48.746 Vendor ID: 0000 00:26:48.746 Subsystem Vendor ID: 0000 00:26:48.746 Serial Number: d092b91328eaf825dc05 00:26:48.746 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:48.746 Firmware Version: 6.8.9-20 00:26:48.746 Recommended Arb Burst: 6 00:26:48.746 IEEE OUI Identifier: 00 00 00 00:26:48.746 Multi-path I/O 00:26:48.746 May have multiple subsystem ports: Yes 00:26:48.746 May have multiple controllers: Yes 00:26:48.746 Associated with SR-IOV VF: No 00:26:48.746 Max Data Transfer Size: Unlimited 00:26:48.746 Max Number of Namespaces: 1024 00:26:48.746 Max Number of I/O Queues: 128 00:26:48.746 NVMe Specification Version (VS): 1.3 00:26:48.746 NVMe Specification Version (Identify): 1.3 00:26:48.746 Maximum Queue Entries: 1024 00:26:48.746 Contiguous Queues Required: No 00:26:48.746 Arbitration Mechanisms Supported 00:26:48.746 Weighted Round Robin: Not Supported 00:26:48.746 Vendor Specific: Not Supported 00:26:48.746 Reset Timeout: 7500 ms 00:26:48.746 Doorbell Stride: 4 bytes 00:26:48.746 NVM Subsystem Reset: Not Supported 00:26:48.746 Command Sets Supported 00:26:48.746 NVM Command Set: Supported 00:26:48.746 Boot Partition: Not Supported 00:26:48.746 
Memory Page Size Minimum: 4096 bytes 00:26:48.746 Memory Page Size Maximum: 4096 bytes 00:26:48.746 Persistent Memory Region: Not Supported 00:26:48.746 Optional Asynchronous Events Supported 00:26:48.746 Namespace Attribute Notices: Supported 00:26:48.746 Firmware Activation Notices: Not Supported 00:26:48.746 ANA Change Notices: Supported 00:26:48.746 PLE Aggregate Log Change Notices: Not Supported 00:26:48.746 LBA Status Info Alert Notices: Not Supported 00:26:48.746 EGE Aggregate Log Change Notices: Not Supported 00:26:48.746 Normal NVM Subsystem Shutdown event: Not Supported 00:26:48.746 Zone Descriptor Change Notices: Not Supported 00:26:48.746 Discovery Log Change Notices: Not Supported 00:26:48.746 Controller Attributes 00:26:48.746 128-bit Host Identifier: Supported 00:26:48.746 Non-Operational Permissive Mode: Not Supported 00:26:48.746 NVM Sets: Not Supported 00:26:48.746 Read Recovery Levels: Not Supported 00:26:48.746 Endurance Groups: Not Supported 00:26:48.746 Predictable Latency Mode: Not Supported 00:26:48.746 Traffic Based Keep ALive: Supported 00:26:48.746 Namespace Granularity: Not Supported 00:26:48.746 SQ Associations: Not Supported 00:26:48.746 UUID List: Not Supported 00:26:48.746 Multi-Domain Subsystem: Not Supported 00:26:48.746 Fixed Capacity Management: Not Supported 00:26:48.746 Variable Capacity Management: Not Supported 00:26:48.746 Delete Endurance Group: Not Supported 00:26:48.746 Delete NVM Set: Not Supported 00:26:48.746 Extended LBA Formats Supported: Not Supported 00:26:48.746 Flexible Data Placement Supported: Not Supported 00:26:48.746 00:26:48.746 Controller Memory Buffer Support 00:26:48.746 ================================ 00:26:48.746 Supported: No 00:26:48.746 00:26:48.746 Persistent Memory Region Support 00:26:48.746 ================================ 00:26:48.746 Supported: No 00:26:48.746 00:26:48.746 Admin Command Set Attributes 00:26:48.746 ============================ 00:26:48.746 Security Send/Receive: Not Supported 00:26:48.746 Format NVM: Not Supported 00:26:48.746 Firmware Activate/Download: Not Supported 00:26:48.746 Namespace Management: Not Supported 00:26:48.746 Device Self-Test: Not Supported 00:26:48.746 Directives: Not Supported 00:26:48.746 NVMe-MI: Not Supported 00:26:48.746 Virtualization Management: Not Supported 00:26:48.746 Doorbell Buffer Config: Not Supported 00:26:48.746 Get LBA Status Capability: Not Supported 00:26:48.746 Command & Feature Lockdown Capability: Not Supported 00:26:48.746 Abort Command Limit: 4 00:26:48.746 Async Event Request Limit: 4 00:26:48.746 Number of Firmware Slots: N/A 00:26:48.746 Firmware Slot 1 Read-Only: N/A 00:26:48.746 Firmware Activation Without Reset: N/A 00:26:48.746 Multiple Update Detection Support: N/A 00:26:48.746 Firmware Update Granularity: No Information Provided 00:26:48.746 Per-Namespace SMART Log: Yes 00:26:48.746 Asymmetric Namespace Access Log Page: Supported 00:26:48.746 ANA Transition Time : 10 sec 00:26:48.746 00:26:48.746 Asymmetric Namespace Access Capabilities 00:26:48.746 ANA Optimized State : Supported 00:26:48.746 ANA Non-Optimized State : Supported 00:26:48.746 ANA Inaccessible State : Supported 00:26:48.746 ANA Persistent Loss State : Supported 00:26:48.746 ANA Change State : Supported 00:26:48.746 ANAGRPID is not changed : No 00:26:48.746 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:48.746 00:26:48.746 ANA Group Identifier Maximum : 128 00:26:48.746 Number of ANA Group Identifiers : 128 00:26:48.746 Max Number of Allowed Namespaces : 1024 00:26:48.746 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:48.746 Command Effects Log Page: Supported 00:26:48.746 Get Log Page Extended Data: Supported 00:26:48.746 Telemetry Log Pages: Not Supported 00:26:48.746 Persistent Event Log Pages: Not Supported 00:26:48.746 Supported Log Pages Log Page: May Support 00:26:48.746 Commands Supported & Effects Log Page: Not Supported 00:26:48.746 Feature Identifiers & Effects Log Page:May Support 00:26:48.746 NVMe-MI Commands & Effects Log Page: May Support 00:26:48.746 Data Area 4 for Telemetry Log: Not Supported 00:26:48.746 Error Log Page Entries Supported: 128 00:26:48.746 Keep Alive: Supported 00:26:48.746 Keep Alive Granularity: 1000 ms 00:26:48.746 00:26:48.746 NVM Command Set Attributes 00:26:48.746 ========================== 00:26:48.746 Submission Queue Entry Size 00:26:48.746 Max: 64 00:26:48.746 Min: 64 00:26:48.746 Completion Queue Entry Size 00:26:48.746 Max: 16 00:26:48.746 Min: 16 00:26:48.746 Number of Namespaces: 1024 00:26:48.746 Compare Command: Not Supported 00:26:48.746 Write Uncorrectable Command: Not Supported 00:26:48.746 Dataset Management Command: Supported 00:26:48.746 Write Zeroes Command: Supported 00:26:48.746 Set Features Save Field: Not Supported 00:26:48.746 Reservations: Not Supported 00:26:48.746 Timestamp: Not Supported 00:26:48.746 Copy: Not Supported 00:26:48.746 Volatile Write Cache: Present 00:26:48.746 Atomic Write Unit (Normal): 1 00:26:48.746 Atomic Write Unit (PFail): 1 00:26:48.746 Atomic Compare & Write Unit: 1 00:26:48.746 Fused Compare & Write: Not Supported 00:26:48.746 Scatter-Gather List 00:26:48.746 SGL Command Set: Supported 00:26:48.746 SGL Keyed: Not Supported 00:26:48.746 SGL Bit Bucket Descriptor: Not Supported 00:26:48.746 SGL Metadata Pointer: Not Supported 00:26:48.746 Oversized SGL: Not Supported 00:26:48.746 SGL Metadata Address: Not Supported 00:26:48.746 SGL Offset: Supported 00:26:48.746 Transport SGL Data Block: Not Supported 00:26:48.746 Replay Protected Memory Block: Not Supported 00:26:48.746 00:26:48.746 Firmware Slot Information 00:26:48.746 ========================= 00:26:48.746 Active slot: 0 00:26:48.746 00:26:48.746 Asymmetric Namespace Access 00:26:48.746 =========================== 00:26:48.746 Change Count : 0 00:26:48.746 Number of ANA Group Descriptors : 1 00:26:48.746 ANA Group Descriptor : 0 00:26:48.747 ANA Group ID : 1 00:26:48.747 Number of NSID Values : 1 00:26:48.747 Change Count : 0 00:26:48.747 ANA State : 1 00:26:48.747 Namespace Identifier : 1 00:26:48.747 00:26:48.747 Commands Supported and Effects 00:26:48.747 ============================== 00:26:48.747 Admin Commands 00:26:48.747 -------------- 00:26:48.747 Get Log Page (02h): Supported 00:26:48.747 Identify (06h): Supported 00:26:48.747 Abort (08h): Supported 00:26:48.747 Set Features (09h): Supported 00:26:48.747 Get Features (0Ah): Supported 00:26:48.747 Asynchronous Event Request (0Ch): Supported 00:26:48.747 Keep Alive (18h): Supported 00:26:48.747 I/O Commands 00:26:48.747 ------------ 00:26:48.747 Flush (00h): Supported 00:26:48.747 Write (01h): Supported LBA-Change 00:26:48.747 Read (02h): Supported 00:26:48.747 Write Zeroes (08h): Supported LBA-Change 00:26:48.747 Dataset Management (09h): Supported 00:26:48.747 00:26:48.747 Error Log 00:26:48.747 ========= 00:26:48.747 Entry: 0 00:26:48.747 Error Count: 0x3 00:26:48.747 Submission Queue Id: 0x0 00:26:48.747 Command Id: 0x5 00:26:48.747 Phase Bit: 0 00:26:48.747 Status Code: 0x2 00:26:48.747 Status Code Type: 0x0 00:26:48.747 Do Not Retry: 1 00:26:48.747 
Error Location: 0x28 00:26:48.747 LBA: 0x0 00:26:48.747 Namespace: 0x0 00:26:48.747 Vendor Log Page: 0x0 00:26:48.747 ----------- 00:26:48.747 Entry: 1 00:26:48.747 Error Count: 0x2 00:26:48.747 Submission Queue Id: 0x0 00:26:48.747 Command Id: 0x5 00:26:48.747 Phase Bit: 0 00:26:48.747 Status Code: 0x2 00:26:48.747 Status Code Type: 0x0 00:26:48.747 Do Not Retry: 1 00:26:48.747 Error Location: 0x28 00:26:48.747 LBA: 0x0 00:26:48.747 Namespace: 0x0 00:26:48.747 Vendor Log Page: 0x0 00:26:48.747 ----------- 00:26:48.747 Entry: 2 00:26:48.747 Error Count: 0x1 00:26:48.747 Submission Queue Id: 0x0 00:26:48.747 Command Id: 0x4 00:26:48.747 Phase Bit: 0 00:26:48.747 Status Code: 0x2 00:26:48.747 Status Code Type: 0x0 00:26:48.747 Do Not Retry: 1 00:26:48.747 Error Location: 0x28 00:26:48.747 LBA: 0x0 00:26:48.747 Namespace: 0x0 00:26:48.747 Vendor Log Page: 0x0 00:26:48.747 00:26:48.747 Number of Queues 00:26:48.747 ================ 00:26:48.747 Number of I/O Submission Queues: 128 00:26:48.747 Number of I/O Completion Queues: 128 00:26:48.747 00:26:48.747 ZNS Specific Controller Data 00:26:48.747 ============================ 00:26:48.747 Zone Append Size Limit: 0 00:26:48.747 00:26:48.747 00:26:48.747 Active Namespaces 00:26:48.747 ================= 00:26:48.747 get_feature(0x05) failed 00:26:48.747 Namespace ID:1 00:26:48.747 Command Set Identifier: NVM (00h) 00:26:48.747 Deallocate: Supported 00:26:48.747 Deallocated/Unwritten Error: Not Supported 00:26:48.747 Deallocated Read Value: Unknown 00:26:48.747 Deallocate in Write Zeroes: Not Supported 00:26:48.747 Deallocated Guard Field: 0xFFFF 00:26:48.747 Flush: Supported 00:26:48.747 Reservation: Not Supported 00:26:48.747 Namespace Sharing Capabilities: Multiple Controllers 00:26:48.747 Size (in LBAs): 3750748848 (1788GiB) 00:26:48.747 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:48.747 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:48.747 UUID: bacae650-d272-41a0-91c4-b8374d413ecc 00:26:48.747 Thin Provisioning: Not Supported 00:26:48.747 Per-NS Atomic Units: Yes 00:26:48.747 Atomic Write Unit (Normal): 8 00:26:48.747 Atomic Write Unit (PFail): 8 00:26:48.747 Preferred Write Granularity: 8 00:26:48.747 Atomic Compare & Write Unit: 8 00:26:48.747 Atomic Boundary Size (Normal): 0 00:26:48.747 Atomic Boundary Size (PFail): 0 00:26:48.747 Atomic Boundary Offset: 0 00:26:48.747 NGUID/EUI64 Never Reused: No 00:26:48.747 ANA group ID: 1 00:26:48.747 Namespace Write Protected: No 00:26:48.747 Number of LBA Formats: 1 00:26:48.747 Current LBA Format: LBA Format #00 00:26:48.747 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:48.747 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # sync 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # set +e 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # for i in {1..20} 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:26:48.747 rmmod nvme_tcp 00:26:48.747 rmmod nvme_fabrics 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # set -e 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@130 -- # return 0 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # iptr 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # remove_spdk_ns 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.747 12:01:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:26:51.298 12:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:54.605 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:54.605 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:54.867 00:26:54.867 real 0m19.533s 00:26:54.867 user 0m5.351s 00:26:54.867 sys 0m11.176s 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.867 ************************************ 00:26:54.867 END TEST nvmf_identify_kernel_target 00:26:54.867 ************************************ 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.867 ************************************ 00:26:54.867 START TEST nvmf_auth_host 00:26:54.867 ************************************ 00:26:54.867 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:55.129 * Looking for test storage... 
00:26:55.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:55.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.129 --rc genhtml_branch_coverage=1 00:26:55.129 --rc genhtml_function_coverage=1 00:26:55.129 --rc genhtml_legend=1 00:26:55.129 --rc geninfo_all_blocks=1 00:26:55.129 --rc geninfo_unexecuted_blocks=1 00:26:55.129 00:26:55.129 ' 00:26:55.129 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:55.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.129 --rc genhtml_branch_coverage=1 00:26:55.129 --rc genhtml_function_coverage=1 00:26:55.130 --rc genhtml_legend=1 00:26:55.130 --rc geninfo_all_blocks=1 00:26:55.130 --rc geninfo_unexecuted_blocks=1 00:26:55.130 00:26:55.130 ' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:55.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.130 --rc genhtml_branch_coverage=1 00:26:55.130 --rc genhtml_function_coverage=1 00:26:55.130 --rc genhtml_legend=1 00:26:55.130 --rc geninfo_all_blocks=1 00:26:55.130 --rc geninfo_unexecuted_blocks=1 00:26:55.130 00:26:55.130 ' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:55.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.130 --rc genhtml_branch_coverage=1 00:26:55.130 --rc genhtml_function_coverage=1 00:26:55.130 --rc genhtml_legend=1 00:26:55.130 --rc geninfo_all_blocks=1 00:26:55.130 --rc geninfo_unexecuted_blocks=1 00:26:55.130 00:26:55.130 ' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.130 12:02:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # : 0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:26:55.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@56 -- # have_pci_nics=0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # xtrace_disable 00:26:55.130 12:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_devs=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_devs 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_net_devs=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # pci_drivers=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # local -A pci_drivers 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # net_devs=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga net_devs 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # e810=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga e810 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # x722=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga x722 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # mlx=() 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # local -ga mlx 00:27:03.275 12:02:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:27:03.275 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:03.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:03.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.276 
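The device-matching trace above boils down to a single sysfs glob: each candidate PCI function's net devices are enumerated from /sys/bus/pci/devices/<bdf>/net/ and only the basenames are kept. A minimal standalone sketch of the same technique, assuming the E810 addresses 0000:4b:00.0 and 0000:4b:00.1 that this run found:

#!/usr/bin/env bash
# Map PCI NICs to their kernel net device names via sysfs, as the
# harness does above. The PCI addresses are the ones this run found;
# adjust them for other hardware.
pci_devs=(0000:4b:00.0 0000:4b:00.1)
for pci in "${pci_devs[@]}"; do
	# Every entry under the device's net/ directory is a netdev owned
	# by that function; the glob stays unexpanded when none exist
	# (e.g. when the port is bound to vfio-pci).
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
	if [[ ! -e ${pci_net_devs[0]} ]]; then
		echo "no net devices under $pci"
		continue
	fi
	pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
done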
12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:03.276 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:03.276 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.276 12:02:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:27:03.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:27:03.276 00:27:03.276 --- 10.0.0.2 ping statistics --- 00:27:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.276 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:27:03.276 00:27:03.276 --- 10.0.0.1 ping statistics --- 00:27:03.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.276 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=198637 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 198637 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 198637 ']' 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
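The nvmftestinit trace above reduces to a short iproute2 sequence: one physical port is moved into a private network namespace to act as the target side, so the initiator (10.0.0.1) and target (10.0.0.2) can exchange real TCP traffic on a single host. A condensed sketch, assuming the cvl_0_0/cvl_0_1 interface names and addresses this run used:

#!/usr/bin/env bash
# Rebuild the two-port NVMe/TCP test network from the trace above.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic on the initiator-side port, as ipts does above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify both directions before starting the target, as the harness does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Teardown is the remove_spdk_ns/iptr pair seen earlier in the log: deleting the namespace (ip netns del cvl_0_0_ns_spdk) returns the port to the default namespace, and restoring the saved iptables rules drops the ACCEPT entry.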
00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.276 12:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=354661113a8edfe87c72ef4d4d8c7911 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:27:03.537 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.rk4 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 354661113a8edfe87c72ef4d4d8c7911 0 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 354661113a8edfe87c72ef4d4d8c7911 0 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=354661113a8edfe87c72ef4d4d8c7911 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.rk4 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.rk4 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rk4 00:27:03.538 12:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=30f2c5959f81b5bf1a32d5a43481ebd9951786aa02457f9e1f17b47bdf8dfc0c 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.10f 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 30f2c5959f81b5bf1a32d5a43481ebd9951786aa02457f9e1f17b47bdf8dfc0c 3 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 30f2c5959f81b5bf1a32d5a43481ebd9951786aa02457f9e1f17b47bdf8dfc0c 3 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=30f2c5959f81b5bf1a32d5a43481ebd9951786aa02457f9e1f17b47bdf8dfc0c 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:27:03.538 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.10f 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.10f 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.10f 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7fee8ee914e6f07c5fac52c5206a15df2fc7cc7080657238 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.HUg 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # 
format_dhchap_key 7fee8ee914e6f07c5fac52c5206a15df2fc7cc7080657238 0 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7fee8ee914e6f07c5fac52c5206a15df2fc7cc7080657238 0 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7fee8ee914e6f07c5fac52c5206a15df2fc7cc7080657238 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.HUg 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.HUg 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HUg 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=235412ef8f33646fd68a69f7f97dd10bdc27dd4c17c465ab 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.eoL 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 235412ef8f33646fd68a69f7f97dd10bdc27dd4c17c465ab 2 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 235412ef8f33646fd68a69f7f97dd10bdc27dd4c17c465ab 2 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=235412ef8f33646fd68a69f7f97dd10bdc27dd4c17c465ab 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.eoL 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.eoL 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eoL 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # local digest len file key 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=177bbce685d837a24e26006930dc6afd 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.aaS 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 177bbce685d837a24e26006930dc6afd 1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 177bbce685d837a24e26006930dc6afd 1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=177bbce685d837a24e26006930dc6afd 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.aaS 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.aaS 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aaS 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ce81a2cd2e3d961e30ec463cb68613d1 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.NBQ 00:27:03.800 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ce81a2cd2e3d961e30ec463cb68613d1 1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ce81a2cd2e3d961e30ec463cb68613d1 1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key 
digest 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ce81a2cd2e3d961e30ec463cb68613d1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.NBQ 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.NBQ 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.NBQ 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=cd9de8b5ae565544bf2020d86e944d2e3fe40f47e6e1d58c 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.gpP 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key cd9de8b5ae565544bf2020d86e944d2e3fe40f47e6e1d58c 2 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 cd9de8b5ae565544bf2020d86e944d2e3fe40f47e6e1d58c 2 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=cd9de8b5ae565544bf2020d86e944d2e3fe40f47e6e1d58c 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:04.063 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.gpP 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.gpP 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gpP 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:04.064 12:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=0b3b7f0efd4988a4d7427838aff55d55 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.DPX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 0b3b7f0efd4988a4d7427838aff55d55 0 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 0b3b7f0efd4988a4d7427838aff55d55 0 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=0b3b7f0efd4988a4d7427838aff55d55 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.DPX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.DPX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DPX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ce2fd0c398be7d735a11904996c460ce78c9bb4a2defe00ec7de8e5bc4157632 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ecD 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ce2fd0c398be7d735a11904996c460ce78c9bb4a2defe00ec7de8e5bc4157632 3 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ce2fd0c398be7d735a11904996c460ce78c9bb4a2defe00ec7de8e5bc4157632 3 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=ce2fd0c398be7d735a11904996c460ce78c9bb4a2defe00ec7de8e5bc4157632 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ecD 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ecD 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ecD 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 198637 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 198637 ']' 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.064 12:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.326 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.326 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:04.326 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rk4 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.10f ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.10f 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HUg 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 
12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eoL ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eoL 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aaS 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.NBQ ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NBQ 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gpP 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DPX ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DPX 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ecD 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
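
Every gen_dhchap_key call traced above follows the same recipe: the requested length is the secret's length in ASCII characters, so xxd pulls half that many raw bytes from /dev/urandom and prints them as a hex string (len=32 -> -l 16, len=48 -> -l 24, len=64 -> -l 32), and that hex string itself, not its decoded bytes, becomes the secret. The `python -` heredoc whose body xtrace cannot show then wraps it in the DHHC-1 representation. A minimal stand-in for that pipeline, assuming the usual base64(secret || CRC-32(secret), little-endian) trailer; note the DHHC-1:01:MTc3YmJj... value reused later in this trace does decode back to the 177bbce6... hex generated above, which is the behavior this sketch reproduces:

# sketch: generate a 32-character secret and format it as a DHHC-1 string
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 16 random bytes -> 32 hex chars
digest=1                               # 0=null 1=sha256 2=sha384 3=sha512
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed CRC-32 trailer
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
chmod 0600 "$file"                     # mode the harness sets before keyring use

The resulting /tmp/spdk.key-* files are what the host/auth.sh@80-82 loop just above hands to rpc_cmd keyring_file_add_key as key0..key4 and ckey0..ckey3.
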
00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:04.327 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:27:04.589 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:04.589 12:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.895 Waiting for block devices as requested 00:27:07.895 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.895 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.895 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.895 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.895 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.156 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.156 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.156 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.416 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:08.416 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:08.416 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:08.677 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:08.677 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:08.677 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.938 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.938 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.938 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:09.881 No valid GPT data, bailing 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:09.881 12:02:17 
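
configure_kernel_target (nvmf/common.sh@656 onward) is plain configfs surgery: pick the first non-zoned, unpartitioned NVMe namespace (the spdk-gpt.py "No valid GPT data, bailing" line above is that check passing), then mkdir the subsystem, namespace, and port nodes and populate their attributes. The xtrace shows only the values being echoed, never the destination files, so the attribute paths below are the standard nvmet configfs layout, not something this log confirms:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # namespace picked above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose subsystem on the port

host/auth.sh@36-38 just below then tightens access: it creates the initiator's NQN under /sys/kernel/config/nvmet/hosts, writes 0 to the subsystem's allow-any-host attribute, and symlinks the host into allowed_hosts, so only the DH-CHAP-configured initiator may connect.
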
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:27:09.881 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:09.882 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:10.142 00:27:10.143 Discovery Log Number of Records 2, Generation counter 2 00:27:10.143 =====Discovery Log Entry 0====== 00:27:10.143 trtype: tcp 00:27:10.143 adrfam: ipv4 00:27:10.143 subtype: current discovery subsystem 00:27:10.143 treq: not specified, sq flow control disable supported 00:27:10.143 portid: 1 00:27:10.143 trsvcid: 4420 00:27:10.143 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:10.143 traddr: 10.0.0.1 00:27:10.143 eflags: none 00:27:10.143 sectype: none 00:27:10.143 =====Discovery Log Entry 1====== 00:27:10.143 trtype: tcp 00:27:10.143 adrfam: ipv4 00:27:10.143 subtype: nvme subsystem 00:27:10.143 treq: not specified, sq flow control disable supported 00:27:10.143 portid: 1 00:27:10.143 trsvcid: 4420 00:27:10.143 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:10.143 traddr: 10.0.0.1 00:27:10.143 eflags: none 00:27:10.143 sectype: none 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.143 nvme0n1 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.143 12:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.143 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.143 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.143 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.143 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
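
Each authentication round above has two halves. On the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the hash ('hmac(sha256)'), the DH group (ffdhe2048), and the DHHC-1 key/ckey pair into the kernel's per-host auth attributes; the destination files are again invisible in the xtrace, but in the standard nvmet layout they are the dhchap_hash, dhchap_dhgroup, dhchap_key, and dhchap_ctrlr_key entries under /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0/. The initiator side, connect_authenticate, reduces to two rpc_cmd calls (rpc_cmd forwards to scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polled earlier), with flags taken verbatim from the trace:

# initiator-side sketch of one connect_authenticate round
scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify, then tear down before the next digest/dhgroup/key combination
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The host/auth.sh@93 variant above first loads every digest and DH group at once (sha256,sha384,sha512 and ffdhe2048 through ffdhe8192) to prove a multi-option negotiation also succeeds; the @100-104 loops then iterate one combination at a time, which is the pattern the rest of this trace repeats.
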
00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.404 nvme0n1 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.404 12:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.404 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.405 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.666 nvme0n1 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.666 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.667 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.928 nvme0n1 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:10.928 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.929 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.190 nvme0n1 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.190 12:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.451 nvme0n1 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.451 12:02:19 
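
keyid=4 is the only entry without a controller key (ckeys[4] is empty, per the [[ -z '' ]] checks above), and its bdev_nvme_attach_controller call accordingly carries --dhchap-key key4 and no --dhchap-ctrlr-key. The line doing that quietly is host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), where bash's :+ expansion produces the optional flag pair only when the backing entry is non-empty. Stand-alone illustration of the idiom (paths are just examples from this run):

declare -a ckeys=([1]=/tmp/spdk.key-sha384.eoL [4]="")
for keyid in 1 4; do
        # :+ yields the alternate words only when ckeys[keyid] is set and non-empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args:
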
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.451 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 nvme0n1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:11.712 
12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.972 nvme0n1 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.972 12:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.972 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.233 nvme0n1 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.233 12:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.233 12:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.233 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.494 nvme0n1 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.494 12:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.494 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 nvme0n1 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.755 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.016 nvme0n1 00:27:13.016 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.016 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.016 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.016 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:13.017 12:02:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.017 12:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.277 nvme0n1 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.277 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.538 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.798 nvme0n1 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.798 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.799 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.060 nvme0n1 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.060 12:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.060 12:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 nvme0n1 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.322 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.893 nvme0n1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 
00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.893 12:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.464 nvme0n1 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.464 12:02:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:15.464 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.465 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.036 nvme0n1 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.036 12:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.296 nvme0n1 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.296 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:16.297 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.557 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.818 nvme0n1 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.818 12:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.768 nvme0n1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.768 12:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.340 nvme0n1 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:18.340 
12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.340 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.911 nvme0n1 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.911 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.172 
12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.172 12:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.743 nvme0n1 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.743 12:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 nvme0n1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 nvme0n1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.686 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.687 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.947 nvme0n1 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:20.947 12:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.947 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.208 nvme0n1 00:27:21.208 12:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.208 12:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.208 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.469 nvme0n1 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.469 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.730 nvme0n1 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:21.730 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:21.731 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.731 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.731 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.991 nvme0n1 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.991 
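The get_main_ns_ip block that precedes every attach selects a variable name by transport and then dereferences it; with a tcp transport it resolves NVMF_INITIATOR_IP to 10.0.0.1 on this run. An approximate reconstruction from the nvmf/common.sh@765-779 frames follows (the transport variable name TEST_TRANSPORT is an assumption; the real helper may differ in detail):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                # @771: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                # @772: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                         # @774: dereference the name
        echo "${!ip}"                                       # @779: echo 10.0.0.1
    }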
12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.991 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:21.992 12:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.992 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 nvme0n1 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.252 12:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.513 nvme0n1 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.513 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.774 nvme0n1 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.774 
12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:22.774 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.775 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.036 nvme0n1 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.036 
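On the target side, the nvmet_auth_set_key echoes at @48-@51 show no destination because set -x does not print redirections; they are writes into the kernel nvmet host entry's DH-CHAP attributes. A sketch of where the keyid=4 iteration's writes would land, assuming the standard nvmet configfs layout and the host NQN used by the attach calls (attribute names taken from the kernel's nvmet host interface, not from this log):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"     # @48
    echo ffdhe3072 > "$host/dhchap_dhgroup"       # @49
    echo "DHHC-1:03:..." > "$host/dhchap_key"     # @50, host secret (elided here);
                                                  # the 03 field marks a SHA-512-
                                                  # transformed secret per TP 8006
    # @51 writes dhchap_ctrl_key only when ckeys[keyid] is non-empty;
    # keyid=4 has no controller key, hence the [[ -z '' ]] above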
12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.036 12:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.297 nvme0n1 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.297 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.298 12:02:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.298 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.559 nvme0n1 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.559 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.820 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.080 nvme0n1 00:27:24.080 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.080 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.080 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.080 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.080 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.081 12:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.342 nvme0n1 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.342 12:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.342 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.343 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.604 nvme0n1 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
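
The cycle traced above (host/auth.sh@42-51) repeats once per key: the test programs a DH-HMAC-CHAP secret into the kernel nvmet target, then authenticates against it from the SPDK initiator. A minimal sketch of the target-side step, assuming the kernel nvmet configfs layout (the attribute paths are an assumption, not shown in the trace; the 'hmac(sha384)', ffdhe4096 and DHHC-1 values echoed above are what gets written):

    # hypothetical configfs plumbing behind nvmet_auth_set_key;
    # host NQN copied from the attach calls in the trace
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host/dhchap_hash"      # digest
    echo 'ffdhe4096'     > "$host/dhchap_dhgroup"   # DH group
    echo "DHHC-1:01:..." > "$host/dhchap_key"       # host secret for this keyid
    echo "DHHC-1:01:..." > "$host/dhchap_ctrl_key"  # controller secret (ckey), bidirectional auth only
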
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.604 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:24.865 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:24.866 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.866 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.866 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 nvme0n1 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:25.127 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.128 12:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.701 nvme0n1 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.701 12:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.701 12:02:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.701 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.274 nvme0n1 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
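
On the initiator side, connect_authenticate (host/auth.sh@55-65) drives SPDK through rpc_cmd, the test suite's wrapper around scripts/rpc.py. Distilled from commands that appear verbatim in the trace, the sha384/ffdhe6144/keyid=2 iteration reduces to this sequence:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc_cmd bdev_nvme_get_controllers   # piped through jq -r '.[].name'; must print nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] checks in the trace are that jq output being compared before the controller is torn down ahead of the next key.
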
key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.274 12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.274 
12:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.535 nvme0n1 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.535 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:26.796 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.797 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.057 nvme0n1 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.057 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.321 12:02:34 
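
One bash detail visible at host/auth.sh@58: the controller-key argument is built as a conditionally empty array, which is why keyid 4 (whose ckey is the empty string, hence the [[ -z '' ]] check above) is attached with --dhchap-key key4 and no --dhchap-ctrlr-key at all, i.e. unidirectional authentication. A sketch of the idiom (the attach line is abbreviated, and "${ckey[@]}" as the expansion point is an assumption):

    # expands to the two extra words only when a controller key exists for this keyid
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
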
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.321 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.322 12:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.893 nvme0n1 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:27.893 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:27.894 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.894 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.894 12:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 nvme0n1 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.836 
12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.836 12:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.408 nvme0n1 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.408 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.980 nvme0n1 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.980 12:02:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.980 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:30.241 12:02:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.241 12:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.813 nvme0n1 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.813 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.074 nvme0n1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.074 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 nvme0n1 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 12:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:31.335 
12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 nvme0n1 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.335 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.596 
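The get_main_ns_ip block traced before every attach (nvmf/common.sh@765-@779) is how the test picks the address to dial: a transport-keyed map of variable names plus an indirect expansion, which for tcp resolves NVMF_INITIATOR_IP to 10.0.0.1. Condensed to its effective behavior (the real helper has a longer fallback chain behind the [[ -z ]] guards):
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
  echo "${!ip}"                          # indirect expansion -> 10.0.0.1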
12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.596 nvme0n1 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.596 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.597 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.857 nvme0n1 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.857 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:31.858 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.119 nvme0n1 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.119 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.120 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.120 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.120 
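Every round is bracketed by the same verify-and-teardown step (host/auth.sh@64-@65): the test asserts that authentication actually produced a controller named nvme0, then detaches it so the next digest/dhgroup/key combination starts clean. With rpc_cmd expanded to plain rpc.py calls for readability:
  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                       # auth succeeded, controller exists
  rpc.py bdev_nvme_detach_controller nvme0   # reset state for the next round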
12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.120 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.120 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.120 12:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.120 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.120 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.120 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:32.120 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:32.380 12:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.380 nvme0n1 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.380 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:32.641 12:02:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.641 nvme0n1 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.641 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.902 12:02:40 
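The @100-@104 markers give the shape of the sweep driving all of the traces in this stretch: every digest, every FFDHE group, every key id, re-keying the target and reconnecting each time. Reconstructed from the for-loop expansions visible above (array names as traced, loop bodies elided):
  for digest in "${digests[@]}"; do          # sha384 and sha512 appear in this stretch
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
          for keyid in "${!keys[@]}"; do     # key ids 0-4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
          done
      done
  done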
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:32.902 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.903 nvme0n1 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.903 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.163 
12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
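All of the secrets echoed above use the NVMe DH-HMAC-CHAP key representation DHHC-1:<t>:<base64>:, where <t> marks the stored secret as raw (00) or transformed with SHA-256/384/512 (01/02/03), and the base64 payload is understood to carry the secret followed by a CRC-32 trailer. One of this run's keys can be inspected accordingly:
  key='DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De:'
  echo "$key" | cut -d: -f3 | base64 -d | xxd   # 32-byte secret + 4-byte CRC-32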
00:27:33.163 nvme0n1 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.163 12:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.163 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.423 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.424 12:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.424 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.684 nvme0n1 00:27:33.684 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.684 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.684 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.684 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.684 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.685 12:02:41 
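The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line traced at auth.sh@58 builds an optional argument array: when the controller key for a keyid is empty, the whole flag pair disappears from the argv instead of being passed as empty strings. A minimal standalone illustration, with hypothetical values:

    # ${var:+word} expands to word only when var is set and non-empty
    declare -a ckeys=([0]="DHHC-1:03:example" [4]="")
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid argv: ${ckey[*]}"
    done
    # keyid=0 argv: --dhchap-ctrlr-key ckey0
    # keyid=4 argv: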
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.685 12:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.685 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.945 nvme0n1 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.945 12:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.206 nvme0n1 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.206 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.514 nvme0n1 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.514 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.515 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.515 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.791 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.792 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.106 nvme0n1 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.106 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.106 12:02:42 
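get_main_ns_ip, traced repeatedly in this run (nvmf/common.sh@765 through @779), selects the name of an address variable by transport and then dereferences it. A hedged reconstruction from the trace; the transport variable's name is an assumption, and the exact guard layout may differ in the real nvmf/common.sh:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                 # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # holds the variable *name*
        [[ -z ${!ip} ]] && return 1           # indirect expansion -> 10.0.0.1 here
        echo "${!ip}"
    }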
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.107 12:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.376 nvme0n1 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.376 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.636 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.637 12:02:43 
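The DHHC-1 strings echoed above are NVMe DH-HMAC-CHAP secret representations: `DHHC-1:<NN>:<base64>:`, where (reading hedged from the NVMe-oF secret format, not something this log asserts) NN encodes how the secret was transformed (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick shell check using keyid 0's secret verbatim from the trace:

    # hedged sanity check of the DHHC-1 layout
    key='DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De:'
    IFS=: read -r tag transform b64 _ <<< "$key"
    echo "tag=$tag transform=$transform"        # tag=DHHC-1 transform=00
    printf '%s' "$b64" | base64 -d | wc -c      # 36 = 32-byte secret + CRC-32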
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.637 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.897 nvme0n1 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.897 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.157 12:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.418 nvme0n1 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.418 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.988 nvme0n1 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:36.988 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.989 12:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.989 12:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.559 nvme0n1 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzU0NjYxMTEzYThlZGZlODdjNzJlZjRkNGQ4Yzc5MTEwp8De: 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzBmMmM1OTU5ZjgxYjViZjFhMzJkNWE0MzQ4MWViZDk5NTE3ODZhYTAyNDU3ZjllMWYxN2I0N2JkZjhkZmMwY3jurrU=: 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.559 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.560 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.130 nvme0n1 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.130 12:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:38.130 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:38.391 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:38.391 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.391 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.391 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.962 nvme0n1 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.962 12:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.962 12:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.962 12:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 nvme0n1 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.537 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2Q5ZGU4YjVhZTU2NTU0NGJmMjAyMGQ4NmU5NDRkMmUzZmU0MGY0N2U2ZTFkNThjZQmCtw==: 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGIzYjdmMGVmZDQ5ODhhNGQ3NDI3ODM4YWZmNTVkNTWJAQu6: 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.798 12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.798 
12:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 nvme0n1 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyZmQwYzM5OGJlN2Q3MzVhMTE5MDQ5OTZjNDYwY2U3OGM5YmI0YTJkZWZlMDBlYzdkZThlNWJjNDE1NzYzMkk369I=: 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.368 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 nvme0n1 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 request: 00:27:41.310 { 00:27:41.310 "name": "nvme0", 00:27:41.310 "trtype": "tcp", 00:27:41.310 "traddr": "10.0.0.1", 00:27:41.310 "adrfam": "ipv4", 00:27:41.310 "trsvcid": "4420", 00:27:41.310 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.310 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.310 "prchk_reftag": false, 00:27:41.310 "prchk_guard": false, 00:27:41.310 "hdgst": false, 00:27:41.310 "ddgst": false, 00:27:41.310 "allow_unrecognized_csi": false, 00:27:41.310 "method": "bdev_nvme_attach_controller", 00:27:41.310 "req_id": 1 00:27:41.310 } 00:27:41.310 Got JSON-RPC error response 00:27:41.310 response: 00:27:41.310 { 00:27:41.310 "code": -5, 00:27:41.310 "message": "Input/output error" 00:27:41.310 } 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 12:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
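Note: the rejected attach above is the host-side half of the DH-HMAC-CHAP exercise. The rpc_cmd helper in the trace resolves to scripts/rpc.py in this suite; a minimal sketch of the two RPCs follows, with flag values copied verbatim from the trace. The key names key1/ckey1 are keyring entries registered earlier in the run, outside this excerpt.

    # Advertise exactly one digest/DH-group pair for DH-HMAC-CHAP
    # (mirrors host/auth.sh@60 in the trace above).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe8192

    # Attach with a host key and an optional bidirectional controller key
    # (mirrors host/auth.sh@61). Omitting --dhchap-key against a target that
    # requires authentication is what produced the -5 error above.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1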
00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.310 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.310 request: 00:27:41.310 { 00:27:41.310 "name": "nvme0", 00:27:41.310 "trtype": "tcp", 00:27:41.310 "traddr": "10.0.0.1", 00:27:41.310 "adrfam": "ipv4", 00:27:41.310 "trsvcid": "4420", 00:27:41.310 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.310 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.311 "prchk_reftag": false, 00:27:41.311 "prchk_guard": false, 00:27:41.311 "hdgst": false, 00:27:41.311 "ddgst": false, 00:27:41.311 "dhchap_key": "key2", 00:27:41.311 "allow_unrecognized_csi": false, 00:27:41.311 "method": "bdev_nvme_attach_controller", 00:27:41.311 "req_id": 1 00:27:41.311 } 00:27:41.311 Got JSON-RPC error response 00:27:41.311 response: 00:27:41.311 { 00:27:41.311 "code": -5, 00:27:41.311 "message": "Input/output error" 00:27:41.311 } 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
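Note: these failures are deliberate. The NOT wrapper from common/autotest_common.sh inverts an exit status so that an expected JSON-RPC rejection (code -5 here, because the offered key does not match what the target was provisioned with) counts as a pass. A reduced sketch of that inversion, simplified from the es bookkeeping visible in the trace; the real helper also vets the argument via valid_exec_arg and treats es > 128 (death by signal) specially.

    # Simplified negative-test wrapper: succeed only when "$@" fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # arithmetic test: exit 0 iff the command failed
    }

    # Usage, as in host/auth.sh@117: this attach must be rejected.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2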
00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.571 request: 00:27:41.571 { 00:27:41.571 "name": "nvme0", 00:27:41.571 "trtype": "tcp", 00:27:41.571 "traddr": "10.0.0.1", 00:27:41.571 "adrfam": "ipv4", 00:27:41.571 "trsvcid": "4420", 00:27:41.571 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:41.571 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:41.571 "prchk_reftag": false, 00:27:41.571 "prchk_guard": false, 00:27:41.571 "hdgst": false, 00:27:41.571 "ddgst": false, 00:27:41.571 "dhchap_key": "key1", 00:27:41.571 "dhchap_ctrlr_key": "ckey2", 00:27:41.571 "allow_unrecognized_csi": false, 00:27:41.571 "method": "bdev_nvme_attach_controller", 00:27:41.571 "req_id": 1 00:27:41.571 } 00:27:41.571 Got JSON-RPC error response 00:27:41.571 response: 00:27:41.571 { 00:27:41.571 "code": -5, 00:27:41.571 "message": "Input/output 
error" 00:27:41.571 } 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.571 nvme0n1 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.571 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.831 request: 00:27:41.831 { 00:27:41.831 "name": "nvme0", 00:27:41.831 "dhchap_key": "key1", 00:27:41.831 "dhchap_ctrlr_key": "ckey2", 00:27:41.831 "method": "bdev_nvme_set_keys", 00:27:41.831 "req_id": 1 00:27:41.831 } 00:27:41.831 Got JSON-RPC error response 00:27:41.831 response: 00:27:41.831 { 00:27:41.831 "code": -13, 00:27:41.831 "message": "Permission denied" 00:27:41.831 } 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:41.831 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:41.832 12:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:42.771 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.771 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:42.771 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.771 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.771 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.031 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:43.031 12:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ZlZThlZTkxNGU2ZjA3YzVmYWM1MmM1MjA2YTE1ZGYyZmM3Y2M3MDgwNjU3MjM4c6zzJw==: 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: ]] 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjM1NDEyZWY4ZjMzNjQ2ZmQ2OGE2OWY3Zjk3ZGQxMGJkYzI3ZGQ0YzE3YzQ2NWFi4oPS6A==: 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.972 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.233 nvme0n1 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTc3YmJjZTY4NWQ4MzdhMjRlMjYwMDY5MzBkYzZhZmSLi842: 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: ]] 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2U4MWEyY2QyZTNkOTYxZTMwZWM0NjNjYjY4NjEzZDHWGFk3: 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.233 request: 00:27:44.233 { 00:27:44.233 "name": "nvme0", 00:27:44.233 "dhchap_key": "key2", 00:27:44.233 "dhchap_ctrlr_key": "ckey1", 00:27:44.233 "method": "bdev_nvme_set_keys", 00:27:44.233 "req_id": 1 00:27:44.233 } 00:27:44.233 Got JSON-RPC error response 00:27:44.233 response: 00:27:44.233 { 00:27:44.233 "code": -13, 00:27:44.233 "message": "Permission denied" 00:27:44.233 } 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.233 12:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:44.233 12:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:45.173 12:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # sync 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # set +e 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # for i in {1..20} 00:27:45.173 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:27:45.433 rmmod nvme_tcp 00:27:45.433 rmmod nvme_fabrics 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # set -e 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@130 -- # return 0 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 198637 ']' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 198637 ']' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 198637' 00:27:45.434 killing process with pid 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 198637 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # iptr 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # remove_spdk_ns 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:45.434 12:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:47.986 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.987 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:27:47.987 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:27:47.987 12:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:51.286 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:51.286 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:51.547 12:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rk4 /tmp/spdk.key-null.HUg /tmp/spdk.key-sha256.aaS /tmp/spdk.key-sha384.gpP /tmp/spdk.key-sha512.ecD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:51.547 12:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:55.752 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
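clean_kernel_target, traced above, tears the kernel nvmet configfs tree down in strict reverse order of creation: unlink the host from the subsystem's allowed_hosts, remove the host entry, disable the namespace, unlink the port-to-subsystem reference, then rmdir the namespace, the port and the subsystem before nvmet_tcp/nvmet can be unloaded. A sketch with the NQNs from this run (the bare 'echo 0' in the trace hides its redirect; the namespace enable attribute is the assumed target):

$ cfg=/sys/kernel/config/nvmet
$ rm "$cfg"/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
$ rmdir "$cfg"/hosts/nqn.2024-02.io.spdk:host0
$ echo 0 > "$cfg"/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable   # assumed redirect target
$ rm -f "$cfg"/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
$ rmdir "$cfg"/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
$ rmdir "$cfg"/ports/1
$ rmdir "$cfg"/subsystems/nqn.2024-02.io.spdk:cnode0
$ modprobe -r nvmet_tcp nvmet

configfs directories refuse rmdir while anything still references them, which is why the symlinks (allowed_hosts, ports/1/subsystems) have to go first.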
00:27:55.752 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:55.752 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:55.752 00:27:55.752 real 1m0.465s 00:27:55.752 user 0m54.075s 00:27:55.752 sys 0m15.922s 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.752 ************************************ 00:27:55.752 END TEST nvmf_auth_host 00:27:55.752 ************************************ 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.752 ************************************ 00:27:55.752 START TEST nvmf_digest 00:27:55.752 ************************************ 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:55.752 * Looking for test storage... 
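nvmf_auth_host closes with its wall-clock accounting (real 1m0.465s) and the harness immediately opens the next suite: run_test prints the START banner, executes the named script under xtrace with timing, and prints END on success. The same suite can be run standalone against a checkout, e.g. (a sketch; root privileges and the autorun-spdk.conf environment from the top of this job are assumed):

$ sudo ./test/nvmf/host/digest.sh --transport=tcp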
00:27:55.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.752 --rc genhtml_branch_coverage=1 00:27:55.752 --rc genhtml_function_coverage=1 00:27:55.752 --rc genhtml_legend=1 00:27:55.752 --rc geninfo_all_blocks=1 00:27:55.752 --rc geninfo_unexecuted_blocks=1 00:27:55.752 00:27:55.752 ' 00:27:55.752 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:55.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.752 --rc genhtml_branch_coverage=1 00:27:55.753 --rc genhtml_function_coverage=1 00:27:55.753 --rc genhtml_legend=1 00:27:55.753 --rc geninfo_all_blocks=1 00:27:55.753 --rc geninfo_unexecuted_blocks=1 00:27:55.753 00:27:55.753 ' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.753 --rc genhtml_branch_coverage=1 00:27:55.753 --rc genhtml_function_coverage=1 00:27:55.753 --rc genhtml_legend=1 00:27:55.753 --rc geninfo_all_blocks=1 00:27:55.753 --rc geninfo_unexecuted_blocks=1 00:27:55.753 00:27:55.753 ' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.753 --rc genhtml_branch_coverage=1 00:27:55.753 --rc genhtml_function_coverage=1 00:27:55.753 --rc genhtml_legend=1 00:27:55.753 --rc geninfo_all_blocks=1 00:27:55.753 --rc geninfo_unexecuted_blocks=1 00:27:55.753 00:27:55.753 ' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.753 
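The probe above is scripts/common.sh deciding whether the installed lcov predates 2.0: cmp_versions splits both version strings on '.', '-' and ':' (read -ra with IFS=.-:), checks each component is numeric (decimal), and compares element-wise until one side wins; here 1.15 < 2, so the lcov 1.x option spellings (--rc lcov_branch_coverage/--rc lcov_function_coverage) are exported in LCOV_OPTS. A condensed sketch of the element-wise compare (the real helper also validates non-numeric components, omitted here):

ver_lt() {                                      # ver_lt 1.15 2  -> exit 0 iff $1 < $2
  local IFS=.-: v
  local -a a b
  read -ra a <<< "$1"; read -ra b <<< "$2"
  for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
    (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # missing components count as 0
    (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
  done
  return 1                                      # equal is not less-than
}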
12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # : 0 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:27:55.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@56 -- # have_pci_nics=0 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # xtrace_disable 00:27:55.753 12:03:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_devs=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_devs 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_net_devs=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # pci_drivers=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # local -A pci_drivers 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # net_devs=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga net_devs 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # e810=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga e810 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # x722=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga x722 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@323 -- # mlx=() 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@323 -- # local -ga mlx 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
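gather_supported_nvmf_pci_devs, starting above, is a pure PCI-ID table: E810 (Intel 0x1592/0x159b), X722 (0x37d2) and a list of Mellanox ConnectX parts are collected into arrays, and since this job runs with SPDK_TEST_NVMF_NICS=e810 the candidate list is then narrowed to the E810 entries, which match the two functions reported just below. The same inventory can be taken manually with lspci's vendor:device filter, e.g.:

$ lspci -d 8086:159b    # E810 functions probed here (0000:4b:00.0 / 0000:4b:00.1 on this box)
$ lspci -d 8086:1592    # the other E810 device ID in the table
$ lspci -d 15b3:        # any Mellanox part, for comparison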
00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.897 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:03.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:03.898 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp 
== tcp ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:03.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:03.898 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.898 12:03:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.898 12:03:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:28:03.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:28:03.898 00:28:03.898 --- 10.0.0.2 ping statistics --- 00:28:03.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.898 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:28:03.898 00:28:03.898 --- 10.0.0.1 ping statistics --- 00:28:03.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.898 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.898 ************************************ 00:28:03.898 START TEST nvmf_digest_clean 00:28:03.898 ************************************ 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=216180 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 216180 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 216180 ']' 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.898 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.899 [2024-12-09 12:03:11.165322] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:28:03.899 [2024-12-09 12:03:11.165388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.899 [2024-12-09 12:03:11.263621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.899 [2024-12-09 12:03:11.313913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.899 [2024-12-09 12:03:11.313963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.899 [2024-12-09 12:03:11.313972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.899 [2024-12-09 12:03:11.313979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.899 [2024-12-09 12:03:11.313986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
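Everything the digest suite needs on the wire was plumbed a few entries up: nvmf_tcp_init moved cvl_0_0 into a fresh cvl_0_0_ns_spdk namespace with 10.0.0.2/24, left cvl_0_1 in the root namespace at 10.0.0.1/24, punched an iptables ACCEPT for TCP/4420, and ping-verified both directions before launching nvmf_tgt inside the namespace with --wait-for-rpc. A condensed sketch of that bring-up (names as in this log; the readiness poll is an assumption — waitforlisten just needs any RPC to answer, and spdk_get_version is one cheap probe):

$ ip netns add cvl_0_0_ns_spdk
$ ip link set cvl_0_0 netns cvl_0_0_ns_spdk
$ ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
$ ip netns exec cvl_0_0_ns_spdk sh -c \
    'ip addr add 10.0.0.2/24 dev cvl_0_0; ip link set cvl_0_0 up; ip link set lo up'
$ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$ ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
$ until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done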
00:28:03.899 [2024-12-09 12:03:11.314762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.159 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.159 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:04.159 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:04.159 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:04.159 12:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.159 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.159 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:04.159 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:04.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:04.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.160 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.420 null0 00:28:04.420 [2024-12-09 12:03:12.123464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.420 [2024-12-09 12:03:12.147776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.420 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.420 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:04.420 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=216278 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 216278 /var/tmp/bperf.sock 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 216278 ']' 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.421 12:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.421 [2024-12-09 12:03:12.209937] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:28:04.421 [2024-12-09 12:03:12.210003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216278 ] 00:28:04.421 [2024-12-09 12:03:12.301769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.682 [2024-12-09 12:03:12.353781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.253 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.253 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:05.253 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:05.253 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:05.253 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:05.514 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.514 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.084 nvme0n1 00:28:06.084 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.084 12:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.084 Running I/O for 2 seconds... 
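Because bdevperf was started with -z and --wait-for-rpc it sits idle on /var/tmp/bperf.sock until the test drives it over RPC: finish framework init, attach an NVMe-oF controller with data digests enabled (--ddgst is what puts crc32c on the I/O path being measured), then kick the workload from bdevperf.py. Condensed from the trace above (paths relative to the spdk checkout):

$ scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$ scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$ examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The two-second randread results follow.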
00:28:07.971 18755.00 IOPS, 73.26 MiB/s [2024-12-09T11:03:15.857Z] 18885.00 IOPS, 73.77 MiB/s 00:28:07.971 Latency(us) 00:28:07.971 [2024-12-09T11:03:15.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.971 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:07.971 nvme0n1 : 2.00 18908.29 73.86 0.00 0.00 6761.87 2826.24 16602.45 00:28:07.971 [2024-12-09T11:03:15.857Z] =================================================================================================================== 00:28:07.971 [2024-12-09T11:03:15.857Z] Total : 18908.29 73.86 0.00 0.00 6761.87 2826.24 16602.45 00:28:07.971 { 00:28:07.971 "results": [ 00:28:07.971 { 00:28:07.971 "job": "nvme0n1", 00:28:07.971 "core_mask": "0x2", 00:28:07.971 "workload": "randread", 00:28:07.971 "status": "finished", 00:28:07.971 "queue_depth": 128, 00:28:07.971 "io_size": 4096, 00:28:07.971 "runtime": 2.004306, 00:28:07.971 "iops": 18908.29045065973, 00:28:07.971 "mibps": 73.86050957288957, 00:28:07.971 "io_failed": 0, 00:28:07.971 "io_timeout": 0, 00:28:07.971 "avg_latency_us": 6761.867913522262, 00:28:07.971 "min_latency_us": 2826.24, 00:28:07.971 "max_latency_us": 16602.453333333335 00:28:07.971 } 00:28:07.971 ], 00:28:07.971 "core_count": 1 00:28:07.971 } 00:28:08.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.232 | select(.opcode=="crc32c") 00:28:08.232 | "\(.module_name) \(.executed)"' 00:28:08.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 216278 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 216278 ']' 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 216278 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.232 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216278 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216278' 00:28:08.493 killing process with pid 216278 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 216278 00:28:08.493 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.493 00:28:08.493 Latency(us) 00:28:08.493 [2024-12-09T11:03:16.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.493 [2024-12-09T11:03:16.379Z] =================================================================================================================== 00:28:08.493 [2024-12-09T11:03:16.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 216278 00:28:08.493 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=217160 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 217160 /var/tmp/bperf.sock 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 217160 ']' 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.494 12:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.494 [2024-12-09 12:03:16.274865] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
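Every pass ends with the same verification, visible above before the kill: pull accel framework statistics out of the bdevperf instance, pick the crc32c opcode out of the operations array, and require that the executing module matches the expectation (software here, since scan_dsa=false) and that its executed counter is nonzero — proof the digests were really computed rather than skipped. Condensed, with a hypothetical output line:

$ scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
software 37810          # hypothetical output: module name and a nonzero executed count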
00:28:08.494 [2024-12-09 12:03:16.274923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217160 ] 00:28:08.494 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.494 Zero copy mechanism will not be used. 00:28:08.494 [2024-12-09 12:03:16.358967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.754 [2024-12-09 12:03:16.388573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.325 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.325 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:09.325 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.325 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.325 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.586 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.586 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.847 nvme0n1 00:28:09.847 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.847 12:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.847 Zero copy mechanism will not be used. 00:28:09.847 Running I/O for 2 seconds... 
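The MiB/s column in these tables is just IOPS scaled by the configured I/O size: MiB/s = IOPS x io_size / 2^20. That is why this 131072-byte, qd16 pass reports numbers in the few-thousand-IOPS range while still moving hundreds of MiB/s — and note the warning above that 131072 exceeds bdevperf's 65536-byte zero-copy threshold, so this pass exercises the copying receive path. A quick integer sanity check against the result that follows:

$ echo $(( 3544 * 131072 / 1048576 ))    # ~443, matching the reported mibps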
00:28:12.177 3792.00 IOPS, 474.00 MiB/s [2024-12-09T11:03:20.063Z] 3541.00 IOPS, 442.62 MiB/s 00:28:12.177 Latency(us) 00:28:12.177 [2024-12-09T11:03:20.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:12.177 nvme0n1 : 2.00 3544.71 443.09 0.00 0.00 4511.16 723.63 11304.96 00:28:12.177 [2024-12-09T11:03:20.063Z] =================================================================================================================== 00:28:12.177 [2024-12-09T11:03:20.063Z] Total : 3544.71 443.09 0.00 0.00 4511.16 723.63 11304.96 00:28:12.177 { 00:28:12.177 "results": [ 00:28:12.177 { 00:28:12.177 "job": "nvme0n1", 00:28:12.177 "core_mask": "0x2", 00:28:12.177 "workload": "randread", 00:28:12.177 "status": "finished", 00:28:12.177 "queue_depth": 16, 00:28:12.177 "io_size": 131072, 00:28:12.177 "runtime": 2.002422, 00:28:12.177 "iops": 3544.7073593877813, 00:28:12.177 "mibps": 443.08841992347266, 00:28:12.177 "io_failed": 0, 00:28:12.177 "io_timeout": 0, 00:28:12.177 "avg_latency_us": 4511.160492157415, 00:28:12.177 "min_latency_us": 723.6266666666667, 00:28:12.177 "max_latency_us": 11304.96 00:28:12.177 } 00:28:12.177 ], 00:28:12.177 "core_count": 1 00:28:12.177 } 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:12.177 | select(.opcode=="crc32c") 00:28:12.177 | "\(.module_name) \(.executed)"' 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 217160 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 217160 ']' 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 217160 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217160 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217160' 00:28:12.177 killing process with pid 217160 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 217160 00:28:12.177 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.177 00:28:12.177 Latency(us) 00:28:12.177 [2024-12-09T11:03:20.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.177 [2024-12-09T11:03:20.063Z] =================================================================================================================== 00:28:12.177 [2024-12-09T11:03:20.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.177 12:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 217160 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=217903 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 217903 /var/tmp/bperf.sock 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 217903 ']' 00:28:12.438 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:12.439 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.439 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.439 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.439 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.439 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.439 [2024-12-09 12:03:20.112880] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
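This third pass completes a small run_bperf matrix: same helper, different (workload, I/O size, queue depth) tuples, all with scan_dsa=false — randread 4096/128, randread 131072/16, and now randwrite 4096/128. The three calls traced so far are equivalent to (a sketch, assuming the run_bperf helper from host/digest.sh is in scope):

for args in 'randread 4096 128' 'randread 131072 16' 'randwrite 4096 128'; do
  run_bperf $args false   # rw, io size in bytes, queue depth, scan_dsa; unquoted $args splits on purpose
done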
00:28:12.439 [2024-12-09 12:03:20.112934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217903 ] 00:28:12.439 [2024-12-09 12:03:20.198138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.439 [2024-12-09 12:03:20.227390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.381 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.381 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.381 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.381 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.381 12:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.381 12:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.381 12:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.643 nvme0n1 00:28:13.643 12:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:13.643 12:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.905 Running I/O for 2 seconds... 
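The commands driving this pass, as issued over the bperf socket in the trace above (workspace prefixes elided): framework_start_init releases the app from --wait-for-rpc, the attach enables TCP data digest with --ddgst, and bdevperf.py perform_tests starts the measured run.

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests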
00:28:15.793 29672.00 IOPS, 115.91 MiB/s [2024-12-09T11:03:23.679Z] 29744.00 IOPS, 116.19 MiB/s 00:28:15.793 Latency(us) 00:28:15.793 [2024-12-09T11:03:23.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.793 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:15.793 nvme0n1 : 2.01 29745.25 116.19 0.00 0.00 4295.99 2170.88 10212.69 00:28:15.793 [2024-12-09T11:03:23.679Z] =================================================================================================================== 00:28:15.793 [2024-12-09T11:03:23.679Z] Total : 29745.25 116.19 0.00 0.00 4295.99 2170.88 10212.69 00:28:15.793 { 00:28:15.793 "results": [ 00:28:15.793 { 00:28:15.793 "job": "nvme0n1", 00:28:15.793 "core_mask": "0x2", 00:28:15.793 "workload": "randwrite", 00:28:15.793 "status": "finished", 00:28:15.793 "queue_depth": 128, 00:28:15.793 "io_size": 4096, 00:28:15.793 "runtime": 2.005564, 00:28:15.793 "iops": 29745.24871806634, 00:28:15.793 "mibps": 116.19237780494664, 00:28:15.793 "io_failed": 0, 00:28:15.793 "io_timeout": 0, 00:28:15.793 "avg_latency_us": 4295.985423986411, 00:28:15.793 "min_latency_us": 2170.88, 00:28:15.793 "max_latency_us": 10212.693333333333 00:28:15.793 } 00:28:15.793 ], 00:28:15.793 "core_count": 1 00:28:15.793 } 00:28:15.793 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:15.793 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:15.793 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:15.793 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:15.793 | select(.opcode=="crc32c") 00:28:15.793 | "\(.module_name) \(.executed)"' 00:28:15.793 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.053 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.053 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.053 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.053 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.053 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 217903 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 217903 ']' 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 217903 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217903 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217903' 00:28:16.054 killing process with pid 217903 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 217903 00:28:16.054 Received shutdown signal, test time was about 2.000000 seconds 00:28:16.054 00:28:16.054 Latency(us) 00:28:16.054 [2024-12-09T11:03:23.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.054 [2024-12-09T11:03:23.940Z] =================================================================================================================== 00:28:16.054 [2024-12-09T11:03:23.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 217903 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=218589 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 218589 /var/tmp/bperf.sock 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 218589 ']' 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.054 12:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.315 [2024-12-09 12:03:23.971996] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
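Each pass above ends with the host/digest.sh@93-96 check: read the crc32c accounting back out of bdevperf and confirm the expected module did the work (software here, since scan_dsa=false). A sketch assembled from the traced pieces, with the jq filter copied verbatim from the trace:

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]]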
00:28:16.315 [2024-12-09 12:03:23.972055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218589 ] 00:28:16.315 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.315 Zero copy mechanism will not be used. 00:28:16.315 [2024-12-09 12:03:24.056673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.315 [2024-12-09 12:03:24.086222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.888 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.888 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:16.888 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:16.888 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:16.888 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.150 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.150 12:03:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.410 nvme0n1 00:28:17.410 12:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:17.410 12:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.410 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.410 Zero copy mechanism will not be used. 00:28:17.410 Running I/O for 2 seconds... 
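The zero-copy notice is specific to this 131072-byte pass; the 4096-byte run above stayed under the cutoff bdevperf reports. The check it describes is just a size comparison (the 65536-byte threshold is the value printed in this log, not a tunable shown here):

    io_size=131072 zcopy_threshold=65536
    (( io_size > zcopy_threshold )) && echo 'Zero copy mechanism will not be used.'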
00:28:19.739 3687.00 IOPS, 460.88 MiB/s [2024-12-09T11:03:27.625Z] 4835.50 IOPS, 604.44 MiB/s 00:28:19.739 Latency(us) 00:28:19.739 [2024-12-09T11:03:27.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.739 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:19.739 nvme0n1 : 2.00 4831.92 603.99 0.00 0.00 3305.65 1133.23 14636.37 00:28:19.739 [2024-12-09T11:03:27.625Z] =================================================================================================================== 00:28:19.739 [2024-12-09T11:03:27.625Z] Total : 4831.92 603.99 0.00 0.00 3305.65 1133.23 14636.37 00:28:19.739 { 00:28:19.739 "results": [ 00:28:19.739 { 00:28:19.739 "job": "nvme0n1", 00:28:19.739 "core_mask": "0x2", 00:28:19.739 "workload": "randwrite", 00:28:19.739 "status": "finished", 00:28:19.739 "queue_depth": 16, 00:28:19.739 "io_size": 131072, 00:28:19.739 "runtime": 2.004793, 00:28:19.739 "iops": 4831.920302993875, 00:28:19.739 "mibps": 603.9900378742344, 00:28:19.739 "io_failed": 0, 00:28:19.739 "io_timeout": 0, 00:28:19.739 "avg_latency_us": 3305.6521275936825, 00:28:19.739 "min_latency_us": 1133.2266666666667, 00:28:19.739 "max_latency_us": 14636.373333333333 00:28:19.739 } 00:28:19.739 ], 00:28:19.739 "core_count": 1 00:28:19.739 } 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:19.739 | select(.opcode=="crc32c") 00:28:19.739 | "\(.module_name) \(.executed)"' 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 218589 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 218589 ']' 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 218589 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218589 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218589' 00:28:19.739 killing process with pid 218589 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 218589 00:28:19.739 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.739 00:28:19.739 Latency(us) 00:28:19.739 [2024-12-09T11:03:27.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.739 [2024-12-09T11:03:27.625Z] =================================================================================================================== 00:28:19.739 [2024-12-09T11:03:27.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.739 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 218589 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 216180 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 216180 ']' 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 216180 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216180 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216180' 00:28:20.000 killing process with pid 216180 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 216180 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 216180 00:28:20.000 00:28:20.000 real 0m16.740s 00:28:20.000 user 0m33.034s 00:28:20.000 sys 0m3.768s 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 ************************************ 00:28:20.000 END TEST nvmf_digest_clean 00:28:20.000 ************************************ 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.000 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.261 ************************************ 00:28:20.261 START TEST nvmf_digest_error 00:28:20.261 ************************************ 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=219310 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 219310 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 219310 ']' 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.261 12:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.261 [2024-12-09 12:03:27.982522] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:28:20.261 [2024-12-09 12:03:27.982576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.261 [2024-12-09 12:03:28.072496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.261 [2024-12-09 12:03:28.103139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.261 [2024-12-09 12:03:28.103168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.261 [2024-12-09 12:03:28.103175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.261 [2024-12-09 12:03:28.103179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.261 [2024-12-09 12:03:28.103184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
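Since nvmfappstart enabled the full 0xFFFF tracepoint mask, the app_setup_trace notices above spell out how to inspect this run; following them literally (instance ID 0 matches the -i 0 argument):

    spdk_trace -s nvmf -i 0          # snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shm file for offline analysis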
00:28:20.261 [2024-12-09 12:03:28.103656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.203 [2024-12-09 12:03:28.817616] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.203 null0 00:28:21.203 [2024-12-09 12:03:28.896452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.203 [2024-12-09 12:03:28.920660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=219650 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 219650 /var/tmp/bperf.sock 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 219650 ']' 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
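nvmf_digest_error differs from the clean variant in one setup step, visible above: accel_assign_opc routes the target's crc32c opcode through the error-injecting accel module. Injection is then toggled around the controller attach, as the trace that follows shows. The sequence, with rpc_cmd going to the nvmf target and bperf_rpc/bperf_py to bdevperf (all commands appear in this log):

    rpc_cmd accel_assign_opc -o crc32c -m error            # target: crc32c via error module
    rpc_cmd accel_error_inject_error -o crc32c -t disable  # keep digests clean for the attach
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    bperf_py perform_tests

With corruption on, every READ's data digest fails verification on the host, producing the nvme_tcp data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that fill the rest of this run; --bdev-retry-count -1 keeps the bdev layer retrying them.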
00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.203 12:03:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.203 [2024-12-09 12:03:28.975507] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:28:21.204 [2024-12-09 12:03:28.975554] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219650 ] 00:28:21.204 [2024-12-09 12:03:29.057465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.204 [2024-12-09 12:03:29.087305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.157 12:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.729 nvme0n1 00:28:22.729 12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:22.729 12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.729 12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.729 
12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.729 12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.729 12:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.729 Running I/O for 2 seconds... 00:28:22.729 [2024-12-09 12:03:30.485324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.485356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.485366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.496941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.496962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.496969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.507406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.507424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.507431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.514414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.514432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.514439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.525105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.525123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.525130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.534024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.534041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.534048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.543893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.543911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.543918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.553085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.553103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.553109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.562113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.562131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.562144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.570316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.570333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.570340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.582369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.729 [2024-12-09 12:03:30.582386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.729 [2024-12-09 12:03:30.582393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.729 [2024-12-09 12:03:30.591492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.730 [2024-12-09 12:03:30.591510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-12-09 12:03:30.591517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-12-09 12:03:30.599561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.730 [2024-12-09 12:03:30.599578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-12-09 12:03:30.599585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.730 [2024-12-09 12:03:30.608863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.730 [2024-12-09 12:03:30.608880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.730 [2024-12-09 12:03:30.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.618393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.618409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.618415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.626771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.626788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.626795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.635852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.635869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.635875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.644439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.644459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.644466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.653647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.653665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.653671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.663912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.663929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.663935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.671434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.671451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.671457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.680703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.680721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.680727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.690850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.690867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.690873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.698355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.698372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.698379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.708155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.708172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.708178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.715794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.715811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.715818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.725353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 
[2024-12-09 12:03:30.725377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.734019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.734036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.734042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.743752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.743769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.743775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.752201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.752218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.752225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.760876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.991 [2024-12-09 12:03:30.760893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.991 [2024-12-09 12:03:30.760900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.991 [2024-12-09 12:03:30.769768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.769784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.769791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.778933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.788921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.788938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4563 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.788944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.797702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.797719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.797728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.806117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.806134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.806141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.815480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.815496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.815502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.824850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.824867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.824873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.834613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.834631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.834642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.843258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.843275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.843281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.852018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.852035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.852041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.861391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.861408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.861415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.992 [2024-12-09 12:03:30.870224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:22.992 [2024-12-09 12:03:30.870241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.992 [2024-12-09 12:03:30.870248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.253 [2024-12-09 12:03:30.881690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.253 [2024-12-09 12:03:30.881710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.881716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.890209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.890226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.890232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.901553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.901570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.901576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.911962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.911979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.911986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.921298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.921314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.921320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.929002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.929019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.929025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.938683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.938700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.938706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.948253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.948270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.948276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.957562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.957579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.957585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.964981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.964997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.965004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.974693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:23.254 [2024-12-09 12:03:30.974710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.254 [2024-12-09 12:03:30.974717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.254 [2024-12-09 12:03:30.984385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 
[... the data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) sequence shown above repeats for further READ commands on tqpair=(0x85cd60), with varying cid and lba, from 12:03:30.984 through 12:03:32.157 ...]
00:28:23.779 27720.00 IOPS, 108.28 MiB/s [2024-12-09T11:03:31.665Z]
[... the digest-error pattern continues past the throughput sample ...]
00:28:24.306 [2024-12-09 12:03:32.157642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60)
00:28:24.306 [2024-12-09 12:03:32.157659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.306 [2024-12-09 12:03:32.157665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.306 [2024-12-09 12:03:32.166241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60)
00:28:24.306 [2024-12-09 12:03:32.166258] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.306 [2024-12-09 12:03:32.166264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.306 [2024-12-09 12:03:32.176282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.306 [2024-12-09 12:03:32.176299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.306 [2024-12-09 12:03:32.176305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.306 [2024-12-09 12:03:32.184578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.306 [2024-12-09 12:03:32.184595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.306 [2024-12-09 12:03:32.184602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.193253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.193270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.193276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.202379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.202396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.211693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.211709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.211715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.219270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.219287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.219293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.228664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 
[2024-12-09 12:03:32.228681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.228687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.237792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.237808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.237815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.246576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.246592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.246599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.255202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.255219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.255225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.265106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.265122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.265129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.274171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.274187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.283235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.283255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.283261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.290682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.290698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.290704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.300957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.567 [2024-12-09 12:03:32.300974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.567 [2024-12-09 12:03:32.300980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.567 [2024-12-09 12:03:32.310389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.310406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.310412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.317945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.317962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.317968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.330047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.330064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.330070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.338018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.338035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.338041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.349759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.349776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.349782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.361872] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.361888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.361895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.372548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.372565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.372571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.383012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.383029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.383035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.391163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.391180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.391187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.401616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.401633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.401645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.411873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.411889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.411895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.421489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.421506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.421512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.428906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.428929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.438914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.438931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.438937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.568 [2024-12-09 12:03:32.449164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.568 [2024-12-09 12:03:32.449181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.568 [2024-12-09 12:03:32.449190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.829 [2024-12-09 12:03:32.459382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.829 [2024-12-09 12:03:32.459399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.829 [2024-12-09 12:03:32.459405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.829 [2024-12-09 12:03:32.466907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x85cd60) 00:28:24.829 [2024-12-09 12:03:32.466924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.829 [2024-12-09 12:03:32.466930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.829 27813.50 IOPS, 108.65 MiB/s 00:28:24.829 Latency(us) 00:28:24.829 [2024-12-09T11:03:32.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:24.829 nvme0n1 : 2.00 27830.06 108.71 0.00 0.00 4594.91 2225.49 18350.08 00:28:24.829 [2024-12-09T11:03:32.715Z] =================================================================================================================== 00:28:24.829 [2024-12-09T11:03:32.715Z] Total : 27830.06 108.71 0.00 0.00 4594.91 2225.49 18350.08 00:28:24.829 { 00:28:24.829 "results": [ 00:28:24.829 { 00:28:24.829 "job": "nvme0n1", 00:28:24.829 "core_mask": "0x2", 00:28:24.829 "workload": "randread", 00:28:24.829 "status": "finished", 00:28:24.829 "queue_depth": 128, 00:28:24.829 "io_size": 4096, 00:28:24.829 "runtime": 2.003409, 00:28:24.829 "iops": 27830.0636564975, 00:28:24.829 "mibps": 108.71118615819336, 
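Each failure above is the same three-line pattern: nvme_tcp reports the receive-side data digest (crc32c) mismatch, nvme_qpair prints the READ it belonged to, and the command completes with status (00/22), i.e. status code type 0 (generic) and status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. dnr:0 on every completion means the do-not-retry bit is clear, and since this run configured bdev_nvme_set_options --bdev-retry-count -1, the bdev layer keeps reissuing the failed reads, which is why the summary below still reports "io_failed": 0. For quick triage of a saved console log, counting the two line types is enough to confirm the digest path fired; a minimal sketch (bdevperf.log is a hypothetical capture of this output, not a file the test writes):

  # each injected crc32c error should contribute one line to both counts
  grep -c 'data digest error on tqpair' bdevperf.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bdevperf.log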
00:28:24.829 "io_failed": 0, 00:28:24.829 "io_timeout": 0, 00:28:24.829 "avg_latency_us": 4594.911974172721, 00:28:24.829 "min_latency_us": 2225.4933333333333, 00:28:24.829 "max_latency_us": 18350.08 00:28:24.829 } 00:28:24.829 ], 00:28:24.829 "core_count": 1 00:28:24.829 } 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:24.829 | .driver_specific 00:28:24.829 | .nvme_error 00:28:24.829 | .status_code 00:28:24.829 | .command_transient_transport_error' 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 219650 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 219650 ']' 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 219650 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.829 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219650 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219650' 00:28:25.090 killing process with pid 219650 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 219650 00:28:25.090 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.090 00:28:25.090 Latency(us) 00:28:25.090 [2024-12-09T11:03:32.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.090 [2024-12-09T11:03:32.976Z] =================================================================================================================== 00:28:25.090 [2024-12-09T11:03:32.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 219650 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:25.090 12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:25.090 12:03:32 
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=220338
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 220338 /var/tmp/bperf.sock
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 220338 ']'
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:03:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.090 [2024-12-09 12:03:32.892205] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:28:25.090 [2024-12-09 12:03:32.892262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220338 ]
00:28:25.090 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:25.090 Zero copy mechanism will not be used.
00:28:25.350 [2024-12-09 12:03:32.977516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:25.350 [2024-12-09 12:03:33.006450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:25.921 12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:26.182 12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:26.442 nvme0n1
00:28:26.442 12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:26.442 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:26.442 Zero copy mechanism will not be used.
00:28:26.442 Running I/O for 2 seconds...
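Before this second pass starts its I/O, the trace above rebuilds the fault in four RPC steps: enable NVMe error counters with unlimited bdev-layer retries, disable any armed corruption while the controller attaches, attach over TCP with data digest (--ddgst) enabled, then corrupt the next 32 crc32c operations so the digests on the wire no longer match what the host computes. A condensed sketch of that sequence, with stated assumptions: $SPDK and $BPERF are shorthand introduced here, and rpc_cmd in the trace talks to the target application's own RPC socket, shown below as a plain rpc.py call that assumes the default socket and that rpc.py exposes the accel error-injection method:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # count NVMe errors per status code; retry failed I/O in the bdev layer forever
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side: make sure no corruption is armed while the controller attaches
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # attach over TCP with data digest so every received payload is CRC-checked
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: corrupt the next 32 crc32c results; the host will see digest
  # errors and complete those READs as COMMAND TRANSIENT TRANSPORT ERROR
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

  # drive the configured workload (randread, 128 KiB, QD 16) for 2 seconds
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests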
00:28:26.442 [2024-12-09 12:03:34.312459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:26.442 [2024-12-09 12:03:34.312495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.442 [2024-12-09 12:03:34.312505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats on tqpair (0x145a8c0) for the 128 KiB reads (len:32), timestamps 12:03:34.322847 through 12:03:34.856413 ...]
00:28:27.230 [2024-12-09 12:03:34.867381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.230 [2024-12-09 12:03:34.867399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.230 [2024-12-09 12:03:34.867406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:27.230 [2024-12-09 12:03:34.877470] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.877489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.877496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.882919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.882937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.882943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.888362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.888381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.888388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.892954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.892972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.892982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.902841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.902859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.902865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.913251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.913269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.913275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.922746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.922764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.922771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:27.230 [2024-12-09 12:03:34.928050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.928068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.928074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.932078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.932097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.932103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.939378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.939395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.939401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.947120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.947139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.947147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.955475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.955493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.955500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.964380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.964402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.964408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.973994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.974020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.983099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.983118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.983124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.988234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.988252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:34.993285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:34.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:34.993312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:35.000955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:35.000974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:35.000980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:35.005711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:35.005730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.230 [2024-12-09 12:03:35.005736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.230 [2024-12-09 12:03:35.014065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.230 [2024-12-09 12:03:35.014083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.014089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.020717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.020735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.020744] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.029351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.029369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.029375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.037699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.037718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.037724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.045681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.045699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.045707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.054702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.054720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.054727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.061936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.061954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.061960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.066411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.066430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.066436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.078068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.078087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 
12:03:35.078093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.089610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.089628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.089635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.100622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.100649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.100655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.231 [2024-12-09 12:03:35.111240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.231 [2024-12-09 12:03:35.111258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.231 [2024-12-09 12:03:35.111264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.120200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.120219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.120225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.128298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.128317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.128323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.138717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.138743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.138749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.150781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.150800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.150806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.161741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.161759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.161765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.172681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.172699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.172706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.181750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.181768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.181774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.190756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.190775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.190781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.202299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.202317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.202324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.213181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.213198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.213204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.224461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.224479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.224485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.235778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.235796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.235803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.245800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.245819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.492 [2024-12-09 12:03:35.245825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.492 [2024-12-09 12:03:35.256220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.492 [2024-12-09 12:03:35.256238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.493 [2024-12-09 12:03:35.256245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.493 [2024-12-09 12:03:35.266531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.493 [2024-12-09 12:03:35.266550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.493 [2024-12-09 12:03:35.266556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.493 [2024-12-09 12:03:35.276177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.493 [2024-12-09 12:03:35.276196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.493 [2024-12-09 12:03:35.276206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.493 [2024-12-09 12:03:35.287151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.493 [2024-12-09 12:03:35.287170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.493 [2024-12-09 12:03:35.287176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.493 [2024-12-09 12:03:35.298874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.493 [2024-12-09 12:03:35.298893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.298899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:27.493 3434.00 IOPS, 429.25 MiB/s [2024-12-09T11:03:35.379Z]
[2024-12-09 12:03:35.311561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.311579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.311586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:27.493 [2024-12-09 12:03:35.324128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.324146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.324152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:27.493 [2024-12-09 12:03:35.337079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.337098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.337105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:27.493 [2024-12-09 12:03:35.349618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.349641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.349648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:27.493 [2024-12-09 12:03:35.360547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.360566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.360572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:27.493 [2024-12-09 12:03:35.371538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:27.493 [2024-12-09 12:03:35.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.493 [2024-12-09 12:03:35.371563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:27.755 [2024-12-09 12:03:35.382764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.382786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.382793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.393646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.393664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.393671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.404516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.404534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.404541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.414452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.414469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.414476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.424354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.424374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.424381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.432967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.432985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.432991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.441599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.441618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.441624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.452886] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.452904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.452910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.464790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.464809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.464815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.472787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.472806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.472812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.754 [2024-12-09 12:03:35.484075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.754 [2024-12-09 12:03:35.484094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.754 [2024-12-09 12:03:35.484100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.493778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.493797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.493804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.504476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.504495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.504501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.515201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.515220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.515227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:27.755 [2024-12-09 12:03:35.524947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.524965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.524971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.536062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.536080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.536086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.546069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.546088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.546094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.556711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.556735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.556741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.567253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.567271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.567277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.577955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.577974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.577980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.586617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.586636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.586647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.597696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.597714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.597721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.609321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.609339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.609346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.620061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.620079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.620085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.755 [2024-12-09 12:03:35.631857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:27.755 [2024-12-09 12:03:35.631876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.755 [2024-12-09 12:03:35.631883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.643647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.643666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.643672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.656540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.656558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.656564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.669340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.669358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.669364] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.682424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.682449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.695499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.695516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.695522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.707814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.707833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.707839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.720138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.720157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.720164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.732941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.732959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.732965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.745819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.745837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.745844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.758197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.758215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.758224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.770006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.770024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.770031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.781187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.781205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.781212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.792285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.792303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.792309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.804328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.804346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.804352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.816765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.816783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.816789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-12-09 12:03:35.829067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.016 [2024-12-09 12:03:35.829084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-12-09 12:03:35.829090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.840648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.840666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.840672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.851260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.851279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.851285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.862894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.862916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.862922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.874766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.874784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.874791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.886966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.886983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.886990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.017 [2024-12-09 12:03:35.898908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.017 [2024-12-09 12:03:35.898926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.017 [2024-12-09 12:03:35.898933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-12-09 12:03:35.910596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.277 [2024-12-09 12:03:35.910614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-12-09 12:03:35.910621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-12-09 12:03:35.921514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.277 [2024-12-09 12:03:35.921532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.921538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.931473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.931492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.931498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.941134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.941153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.941159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.951748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.951766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.951773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.962334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.962359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.973352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.973370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.973377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.984646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.984664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.984670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:35.994956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:35.994975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:35.994982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.005815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.005834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.005840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.015204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.015223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.015229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.024524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.024542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.024548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.033688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.033706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.033712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.045003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.045022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.045031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.055693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.055718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.066545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 
[2024-12-09 12:03:36.066563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.066569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.076383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.076402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.076408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.087029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.087047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.087053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.097907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.097925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.097931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.107145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.107162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.107169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.115839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.126738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.126755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.126762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.137531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.137548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.137555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.148279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.148296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.148302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-12-09 12:03:36.155356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.278 [2024-12-09 12:03:36.155374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-12-09 12:03:36.155380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.167341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.167359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.167365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.179840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.179858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.179864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.191462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.191480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.191486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.204057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.204075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.204081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.215207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.215224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.215230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.225352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.225370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.225379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.234870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.234887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.234893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.246725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.246741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.539 [2024-12-09 12:03:36.246748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.539 [2024-12-09 12:03:36.256741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.539 [2024-12-09 12:03:36.256758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-12-09 12:03:36.256765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.540 [2024-12-09 12:03:36.267566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.540 [2024-12-09 12:03:36.267584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-12-09 12:03:36.267590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-12-09 12:03:36.277588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0) 00:28:28.540 [2024-12-09 12:03:36.277605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-12-09 12:03:36.277611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
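
The triplets above all repeat one failure pattern: nvme_tcp.c reports a data digest mismatch on the queue pair, nvme_qpair.c prints the READ that carried the bad payload, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), after which the bdev layer quietly resubmits it. The errors are deliberate. A condensed sketch of the setup that produces them, with the RPC names and arguments taken verbatim from the xtrace in this log and the wrapper body an assumption ($rootdir stands for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout):

    bperf_rpc() {
        # every host-side RPC goes to the long-lived bdevperf process on its own socket
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
    }

    # count NVMe error statuses per bdev and retry failed I/O forever
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the TCP controller with data digest (DDGST) enabled
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt one out of every 256 crc32c results computed by the accel framework
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

With roughly one corrupted digest per 256 crc32c operations, a 2-second run yields the steady error cadence seen here while the workload itself keeps completing.
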
00:28:28.540 [2024-12-09 12:03:36.286077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:28.540 [2024-12-09 12:03:36.286093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.540 [2024-12-09 12:03:36.286100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:28.540 [2024-12-09 12:03:36.296580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:28.540 [2024-12-09 12:03:36.296597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.540 [2024-12-09 12:03:36.296603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:28.540 [2024-12-09 12:03:36.307093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x145a8c0)
00:28:28.540 [2024-12-09 12:03:36.307111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.540 [2024-12-09 12:03:36.307117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:28.540 3131.00 IOPS, 391.38 MiB/s
00:28:28.540 Latency(us)
00:28:28.540 [2024-12-09T11:03:36.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.540 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:28.540 nvme0n1 : 2.00 3138.32 392.29 0.00 0.00 5094.32 404.48 13544.11
00:28:28.540 [2024-12-09T11:03:36.426Z] ===================================================================================================================
00:28:28.540 [2024-12-09T11:03:36.426Z] Total : 3138.32 392.29 0.00 0.00 5094.32 404.48 13544.11
00:28:28.540 {
00:28:28.540   "results": [
00:28:28.540     {
00:28:28.540       "job": "nvme0n1",
00:28:28.540       "core_mask": "0x2",
00:28:28.540       "workload": "randread",
00:28:28.540       "status": "finished",
00:28:28.540       "queue_depth": 16,
00:28:28.540       "io_size": 131072,
00:28:28.540       "runtime": 2.004894,
00:28:28.540       "iops": 3138.3205296639126,
00:28:28.540       "mibps": 392.29006620798907,
00:28:28.540       "io_failed": 0,
00:28:28.540       "io_timeout": 0,
00:28:28.540       "avg_latency_us": 5094.316321254502,
00:28:28.540       "min_latency_us": 404.48,
00:28:28.540       "max_latency_us": 13544.106666666667
00:28:28.540     }
00:28:28.540   ],
00:28:28.540   "core_count": 1
00:28:28.540 }
00:28:28.540 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:28.540 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:28.540 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:28.540 | .driver_specific
00:28:28.540 | .nvme_error
00:28:28.540 | .status_code
00:28:28.540 | .command_transient_transport_error'
00:28:28.540 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
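
The three xtrace steps above show where the assertion input comes from: because the controller was created with --nvme-error-stat, bdev_get_iostat carries a per-status error histogram in its driver-specific block, and jq pulls out the counter for generic status (00/22). A reconstruction of the helper, under the same assumptions as the sketch above:

    get_transient_errcount() {
        # command_transient_transport_error counts completions with status (00/22)
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

For the randread pass it returns 203, which is exactly what the (( 203 > 0 )) check on the next line asserts: at least one injected digest error must have been observed and recorded.
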
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 ))
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 220338
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 220338 ']'
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 220338
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:28.800 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220338
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220338'
00:28:28.801 killing process with pid 220338
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 220338
00:28:28.801 Received shutdown signal, test time was about 2.000000 seconds
00:28:28.801
00:28:28.801 Latency(us)
00:28:28.801 [2024-12-09T11:03:36.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.801 [2024-12-09T11:03:36.687Z] ===================================================================================================================
00:28:28.801 [2024-12-09T11:03:36.687Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 220338
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=221019
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 221019 /var/tmp/bperf.sock
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 221019 ']'
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:28.801 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
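
Here run_bperf_err switches to the write direction: the old bdevperf (pid 220338) has been killed and a fresh instance is spawned with the randwrite/4096/128 parameters baked into its command line, plus -z so it idles until told to run. waitforlisten then holds the script until the new process (pid 221019) answers on the socket. A simplified sketch of that launch-and-wait step; the real waitforlisten in autotest_common.sh is more careful, and the polling body here is an assumption:

    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!    # 221019 in this run

    # poll until the RPC socket answers, bounded by max_retries=100
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
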
00:28:29.062 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:29.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:29.062 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:29.062 12:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.062 [2024-12-09 12:03:36.734306] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:28:29.062 [2024-12-09 12:03:36.734367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221019 ]
00:28:29.062 [2024-12-09 12:03:36.818023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.062 [2024-12-09 12:03:36.847615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.003 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.264 nvme0n1
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
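
bperf_py is the counterpart of bperf_rpc for bdevperf's own RPC methods; by assumption it is a thin wrapper around the bundled client that the next trace line expands to:

    bperf_py() {
        # drive the already-running bdevperf over the same UNIX socket
        "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"
    }

perform_tests starts the workload that was fixed on the command line (-w randwrite -o 4096 -q 128 -t 2), hence the "Running I/O for 2 seconds..." banner below. Note that in this pass the digest mismatches are reported by tcp.c:2241:data_crc32_calc_done, the side receiving the written data, rather than by the host's nvme_tcp.c read path seen earlier.
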
00:28:30.264 12:03:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:30.264 Running I/O for 2 seconds...
00:28:30.264 [2024-12-09 12:03:38.049837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee99d8
00:28:30.264 [2024-12-09 12:03:38.051036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.051063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.056973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eef270
00:28:30.264 [2024-12-09 12:03:38.057688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.057705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.065387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef0350
00:28:30.264 [2024-12-09 12:03:38.066104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.066121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.073887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef1430
00:28:30.264 [2024-12-09 12:03:38.074576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.082373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef2510
00:28:30.264 [2024-12-09 12:03:38.083072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.083088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.090856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef35f0
00:28:30.264 [2024-12-09 12:03:38.091549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.091564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.264 [2024-12-09 12:03:38.100400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef46d0
00:28:30.264 [2024-12-09 12:03:38.101569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.264 [2024-12-09 12:03:38.101585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:30.264 [2024-12-09 12:03:38.108381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efeb58 00:28:30.264 [2024-12-09 12:03:38.109214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-12-09 12:03:38.109230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:30.264 [2024-12-09 12:03:38.116784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efdeb0 00:28:30.264 [2024-12-09 12:03:38.117586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-12-09 12:03:38.117607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:30.264 [2024-12-09 12:03:38.125518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeaef0 00:28:30.264 [2024-12-09 12:03:38.126194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-12-09 12:03:38.126211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.264 [2024-12-09 12:03:38.133845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eebb98 00:28:30.264 [2024-12-09 12:03:38.134565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-12-09 12:03:38.134581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.264 [2024-12-09 12:03:38.142291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeaef0 00:28:30.264 [2024-12-09 12:03:38.143026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-12-09 12:03:38.143042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.525 [2024-12-09 12:03:38.151112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eebb98 00:28:30.525 [2024-12-09 12:03:38.152054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.525 [2024-12-09 12:03:38.152070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.525 [2024-12-09 12:03:38.159466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeaab8 00:28:30.525 [2024-12-09 12:03:38.160385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7354 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:30.525 [2024-12-09 12:03:38.160401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.525 [2024-12-09 12:03:38.167892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee99d8 00:28:30.525 [2024-12-09 12:03:38.168781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.168797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.176315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef0bc0 00:28:30.526 [2024-12-09 12:03:38.177239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.177255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.184754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eefae0 00:28:30.526 [2024-12-09 12:03:38.185642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.185658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.193215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeea00 00:28:30.526 [2024-12-09 12:03:38.194146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.194162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.201667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eed920 00:28:30.526 [2024-12-09 12:03:38.202582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.202598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.210097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef6cc8 00:28:30.526 [2024-12-09 12:03:38.211022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.211038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.217941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016edfdc0 00:28:30.526 [2024-12-09 12:03:38.218850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5312 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.218865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.227222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.228261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.228277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.235690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.236697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.236713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.244129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.245172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.245188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.252590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.253635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.253654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.261026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.262062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.262078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.269492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.270544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.270560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.277944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.278975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.278991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.286406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.287410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.287426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.294887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.295905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.295921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.303338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.304379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.304395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.311802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.312809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.312825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.320238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.321274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.321290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.328706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.329742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.329757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.337176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.338215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.338233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.345612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.346649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.346665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.354114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.355152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.355168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.362730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.526 [2024-12-09 12:03:38.363776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.363792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.371191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ede8a8 00:28:30.526 [2024-12-09 12:03:38.372231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.372247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.379832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef4298 00:28:30.526 [2024-12-09 12:03:38.380883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.380899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.388269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef9f68 00:28:30.526 [2024-12-09 12:03:38.389332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.526 [2024-12-09 12:03:38.389348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.526 [2024-12-09 12:03:38.396710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee5220 00:28:30.526 [2024-12-09 
12:03:38.397759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.527 [2024-12-09 12:03:38.397775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.527 [2024-12-09 12:03:38.405133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee1b48 00:28:30.527 [2024-12-09 12:03:38.406199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.527 [2024-12-09 12:03:38.406215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.413569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef4f40 00:28:30.788 [2024-12-09 12:03:38.414602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.788 [2024-12-09 12:03:38.414618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.422009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efcdd0 00:28:30.788 [2024-12-09 12:03:38.423055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.788 [2024-12-09 12:03:38.423071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.430447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016edece0 00:28:30.788 [2024-12-09 12:03:38.431492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.788 [2024-12-09 12:03:38.431508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.438880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eddc00 00:28:30.788 [2024-12-09 12:03:38.439902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.788 [2024-12-09 12:03:38.439918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.447289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef5be8 00:28:30.788 [2024-12-09 12:03:38.448292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.788 [2024-12-09 12:03:38.448308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.788 [2024-12-09 12:03:38.455702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee8088 00:28:30.788 
00:28:30.788 [2024-12-09 12:03:38.456707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.788 [2024-12-09 12:03:38.456723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:30.788 [2024-12-09 12:03:38.464138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee6fa8
00:28:30.788 [2024-12-09 12:03:38.465179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.788 [2024-12-09 12:03:38.465195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
[... repeated data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR (00/22) log triplets on tqpair=(0x1cc8eb0) elided; only the pdu, cid, lba, and sqhd values vary ...]
00:28:31.050 29986.00 IOPS, 117.13 MiB/s [2024-12-09T11:03:39.197Z]
[... further identical data-digest-error log triplets elided ...]
00:28:31.836 [2024-12-09 12:03:39.682772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee6fa8
00:28:31.836 [2024-12-09 12:03:39.683811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:31.836 [2024-12-09 12:03:39.683827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:31.836
[2024-12-09 12:03:39.691210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeee38 00:28:31.836 [2024-12-09 12:03:39.692254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.836 [2024-12-09 12:03:39.692270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:31.836 [2024-12-09 12:03:39.699636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef6020 00:28:31.836 [2024-12-09 12:03:39.700677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.836 [2024-12-09 12:03:39.700693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:31.836 [2024-12-09 12:03:39.708094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efb8b8 00:28:31.836 [2024-12-09 12:03:39.709135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.836 [2024-12-09 12:03:39.709151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:31.836 [2024-12-09 12:03:39.716532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeb328 00:28:31.836 [2024-12-09 12:03:39.717577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.836 [2024-12-09 12:03:39.717593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.724962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eea680 00:28:32.098 [2024-12-09 12:03:39.726016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.726032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.733382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee3d08 00:28:32.098 [2024-12-09 12:03:39.734445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.734461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.741823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eecc78 00:28:32.098 [2024-12-09 12:03:39.742875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.742892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:28:32.098 [2024-12-09 12:03:39.750259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016edf550 00:28:32.098 [2024-12-09 12:03:39.751321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.751337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.758721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eec408 00:28:32.098 [2024-12-09 12:03:39.759768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.767149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef92c0 00:28:32.098 [2024-12-09 12:03:39.768207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.768223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.775569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee4578 00:28:32.098 [2024-12-09 12:03:39.776616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.776632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.783988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee12d8 00:28:32.098 [2024-12-09 12:03:39.785032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.785048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.792439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef31b8 00:28:32.098 [2024-12-09 12:03:39.793496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.793512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.800888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efc998 00:28:32.098 [2024-12-09 12:03:39.801937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.801952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 
cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.809347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee8088 00:28:32.098 [2024-12-09 12:03:39.810397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.810413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.817791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee5a90 00:28:32.098 [2024-12-09 12:03:39.818838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.818855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.826224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eee190 00:28:32.098 [2024-12-09 12:03:39.827273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.827290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.834680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef6458 00:28:32.098 [2024-12-09 12:03:39.835708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.835724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.843117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef8a50 00:28:32.098 [2024-12-09 12:03:39.844169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.844185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.851553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efbcf0 00:28:32.098 [2024-12-09 12:03:39.852596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.852612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.859989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeaef0 00:28:32.098 [2024-12-09 12:03:39.861036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.861052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.868404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eff3c8 00:28:32.098 [2024-12-09 12:03:39.869445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.869461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.876820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee6300 00:28:32.098 [2024-12-09 12:03:39.877865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.877884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.885241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef2d80 00:28:32.098 [2024-12-09 12:03:39.886291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.886307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.893680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efd640 00:28:32.098 [2024-12-09 12:03:39.894704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.894720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.902121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eed4e8 00:28:32.098 [2024-12-09 12:03:39.903164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.903181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.098 [2024-12-09 12:03:39.910558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efdeb0 00:28:32.098 [2024-12-09 12:03:39.911611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.098 [2024-12-09 12:03:39.911627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.918984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efeb58 00:28:32.099 [2024-12-09 12:03:39.920047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.920064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.927400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef9f68 00:28:32.099 [2024-12-09 12:03:39.928459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.928475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.935863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee7c50 00:28:32.099 [2024-12-09 12:03:39.936909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.936925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.944293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eee5c8 00:28:32.099 [2024-12-09 12:03:39.945342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.945358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.952734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef57b0 00:28:32.099 [2024-12-09 12:03:39.953742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.953758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.961168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee6fa8 00:28:32.099 [2024-12-09 12:03:39.962217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.962233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.969583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeee38 00:28:32.099 [2024-12-09 12:03:39.970632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.970651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.099 [2024-12-09 12:03:39.978048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ef6020 00:28:32.099 [2024-12-09 12:03:39.979100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.099 [2024-12-09 12:03:39.979115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.359 [2024-12-09 12:03:39.986484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efb8b8 00:28:32.359 [2024-12-09 12:03:39.987529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.359 [2024-12-09 12:03:39.987545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.359 [2024-12-09 12:03:39.995020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeb328 00:28:32.359 [2024-12-09 12:03:39.996083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.359 [2024-12-09 12:03:39.996100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.360 [2024-12-09 12:03:40.005041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eea680 00:28:32.360 [2024-12-09 12:03:40.006417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.360 [2024-12-09 12:03:40.006433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.360 [2024-12-09 12:03:40.011045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016ee8088 00:28:32.360 [2024-12-09 12:03:40.011603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.360 [2024-12-09 12:03:40.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:32.360 [2024-12-09 12:03:40.020705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efc998 00:28:32.360 [2024-12-09 12:03:40.021720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.360 [2024-12-09 12:03:40.021735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:32.360 [2024-12-09 12:03:40.028249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016eeb328 00:28:32.360 [2024-12-09 12:03:40.028792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.360 [2024-12-09 12:03:40.028808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:32.360 [2024-12-09 12:03:40.037093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc8eb0) with pdu=0x200016efa7d8 00:28:32.360 [2024-12-09 12:03:40.037852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.360 [2024-12-09 
12:03:40.037868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:32.360 30116.50 IOPS, 117.64 MiB/s
00:28:32.360 Latency(us)
00:28:32.360 [2024-12-09T11:03:40.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.360 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:32.360 nvme0n1 : 2.00 30111.26 117.62 0.00 0.00 4245.16 2129.92 11851.09
00:28:32.360 [2024-12-09T11:03:40.246Z] ===================================================================================================================
00:28:32.360 [2024-12-09T11:03:40.246Z] Total : 30111.26 117.62 0.00 0.00 4245.16 2129.92 11851.09
00:28:32.360 {
00:28:32.360   "results": [
00:28:32.360     {
00:28:32.360       "job": "nvme0n1",
00:28:32.360       "core_mask": "0x2",
00:28:32.360       "workload": "randwrite",
00:28:32.360       "status": "finished",
00:28:32.360       "queue_depth": 128,
00:28:32.360       "io_size": 4096,
00:28:32.360       "runtime": 2.004599,
00:28:32.360       "iops": 30111.259159562585,
00:28:32.360       "mibps": 117.62210609204135,
00:28:32.360       "io_failed": 0,
00:28:32.360       "io_timeout": 0,
00:28:32.360       "avg_latency_us": 4245.156689473887,
00:28:32.360       "min_latency_us": 2129.92,
00:28:32.360       "max_latency_us": 11851.093333333334
00:28:32.360     }
00:28:32.360   ],
00:28:32.360   "core_count": 1
00:28:32.360 }
00:28:32.360 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:32.360 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:32.360 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:32.360 | .driver_specific
00:28:32.360 | .nvme_error
00:28:32.360 | .status_code
00:28:32.360 | .command_transient_transport_error'
00:28:32.360 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 221019
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 221019 ']'
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 221019
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221019
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221019'
killing process with pid 221019
12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 221019
00:28:32.621 Received shutdown signal, test time was about 2.000000 seconds
00:28:32.621
00:28:32.621 Latency(us)
[2024-12-09T11:03:40.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-09T11:03:40.507Z] ===================================================================================================================
[2024-12-09T11:03:40.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 221019
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=221780
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 221780 /var/tmp/bperf.sock
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 221780 ']'
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:32.621 12:03:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:32.882 [2024-12-09 12:03:40.525042] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:28:32.882 [2024-12-09 12:03:40.525100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221780 ]
00:28:32.882 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:32.882 Zero copy mechanism will not be used.
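For reference, the pass/fail check traced above (host/digest.sh@71) condenses to the short standalone sketch below. It is a reconstruction from the xtrace, not the verbatim script: the errcount variable name is introduced here for illustration, while the rpc.py path, the socket, the bdev name, and the jq filter are taken directly from the trace. As a sanity check on the summary table above, 30111.26 IOPS of 4096-byte writes is 30111.26 * 4096 / 2^20 = 117.62 MiB/s, matching the reported "mibps" field.

  #!/usr/bin/env bash
  # Sketch of the transient-error check, reconstructed from the xtrace above.
  # Assumes bdevperf is still serving RPCs on /var/tmp/bperf.sock and was set up
  # with bdev_nvme_set_options --nvme-error-stat, so per-status-code NVMe error
  # counters appear under driver_specific in the bdev_get_iostat output.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')
  # The run above counted 236 such completions, so this test evaluates true.
  (( errcount > 0 ))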
00:28:32.882 [2024-12-09 12:03:40.610887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:32.882 [2024-12-09 12:03:40.639557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:33.453 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:33.453 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:33.453 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.453 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.714 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.975 nvme0n1
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:33.975 12:03:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:33.975 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:33.975 Zero copy mechanism will not be used.
00:28:33.975 Running I/O for 2 seconds...
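Before the digest-error records of this second run below, the setup traced above can be read as one short RPC sequence. Again a sketch assembled from the xtrace under stated assumptions: the rpc() wrapper and the $spdk variable are added here purely for readability, and the comments paraphrase the flags as traced rather than asserting their full semantics; every command, address, and flag is otherwise verbatim from the log.

  #!/usr/bin/env bash
  # Sketch of the 131072-byte / qd=16 randwrite error-injection setup.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  # Keep per-NVMe-status-code error counters; bdev retry count -1, as traced.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave CRC32C error injection disabled while the controller attaches.
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest enabled (--ddgst); this exposes nvme0n1.
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm injection against CRC32C (-t corrupt -i 32, exactly as traced), so
  # write payloads fail their data digest check during the workload.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the timed workload; the injected digest failures then surface as the
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions recorded below.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests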
00:28:33.975 [2024-12-09 12:03:41.828961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.829222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.829248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:33.975 [2024-12-09 12:03:41.834653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.834935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.834956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:33.975 [2024-12-09 12:03:41.843184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.843237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.843254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:33.975 [2024-12-09 12:03:41.847613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.847933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.847950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:33.975 [2024-12-09 12:03:41.852348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.852419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.852435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:33.975 [2024-12-09 12:03:41.857328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:33.975 [2024-12-09 12:03:41.857420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.975 [2024-12-09 12:03:41.857443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.861377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.861442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.861458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.866659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.866720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.866736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.873339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.873390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.873406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.877763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.877850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.877866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.883908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.884161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.884176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.888914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.888970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.888985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.893076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.893247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.893262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.900152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.900419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.900435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.907429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.907501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.907516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.915109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.915180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.915195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.919244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.919293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.919309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.924948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.925032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.925047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.929345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.237 [2024-12-09 12:03:41.929396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.237 [2024-12-09 12:03:41.929411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.237 [2024-12-09 12:03:41.933465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.933512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.933527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.938101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.938179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.938195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.942362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.942417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.942432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.946906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.947199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.947215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.952033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.952088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.952104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.957629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.957930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.957946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.965658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.965753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.969966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.970026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.970041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.238 [2024-12-09 12:03:41.975093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:34.238 [2024-12-09 12:03:41.975180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.238 [2024-12-09 12:03:41.975196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:34.238 [2024-12-09 12:03:41.981074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:34.238 [2024-12-09 12:03:41.981124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.238 [2024-12-09 12:03:41.981139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:34.238 [2024-12-09 12:03:41.985147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:34.238 [2024-12-09 12:03:41.985204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.238 [2024-12-09 12:03:41.985219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same data_crc32_calc_done / nvme_io_qpair_print_command / spdk_nvme_print_completion triple repeats for each injected WRITE from 12:03:41.989073 through 12:03:42.792871 on tqpair=(0x1cc91f0); only the lba, the timestamps, and sqhd (cycling 0002/0022/0042/0062) vary ...]
00:28:35.028 [2024-12-09 12:03:42.797401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:35.028 [2024-12-09 12:03:42.797611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.028 [2024-12-09 12:03:42.797627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.028 [2024-12-09 12:03:42.801944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:35.028 [2024-12-09 12:03:42.802185] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.802200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.028 [2024-12-09 12:03:42.806126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.806416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.806432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.028 [2024-12-09 12:03:42.814610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.814960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.814977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.028 5442.00 IOPS, 680.25 MiB/s [2024-12-09T11:03:42.914Z] [2024-12-09 12:03:42.824941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.825226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.825243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.028 [2024-12-09 12:03:42.835296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.835495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.835512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.028 [2024-12-09 12:03:42.845956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.846281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.846298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.028 [2024-12-09 12:03:42.856036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.028 [2024-12-09 12:03:42.856273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.028 [2024-12-09 12:03:42.856288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.029 [2024-12-09 12:03:42.867118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 
00:28:35.029 [2024-12-09 12:03:42.867373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.029 [2024-12-09 12:03:42.867390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.029 [2024-12-09 12:03:42.877793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.029 [2024-12-09 12:03:42.878144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.029 [2024-12-09 12:03:42.878161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.029 [2024-12-09 12:03:42.888915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.029 [2024-12-09 12:03:42.889161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.029 [2024-12-09 12:03:42.889179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.029 [2024-12-09 12:03:42.899989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.029 [2024-12-09 12:03:42.900209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.029 [2024-12-09 12:03:42.900224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.029 [2024-12-09 12:03:42.910199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.029 [2024-12-09 12:03:42.910461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.029 [2024-12-09 12:03:42.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.920610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.920832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.920849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.929035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.929225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.929241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.933780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.933972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.933991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.938670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.938878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.938894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.943246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.943448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.943464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.947482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.947688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.947705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.951517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.951712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.951729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.955198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.955387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.955404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.962245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.962538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.962556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.966124] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.291 [2024-12-09 12:03:42.966326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.291 [2024-12-09 12:03:42.966342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.291 [2024-12-09 12:03:42.970047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.970248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.970264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.973789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.973982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.973998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.977497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.977693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.977709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.981412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.981602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.981618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.985651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.985841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.985857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.989127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.989318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.989334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.992694] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.992997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.993014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:42.998521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:42.998808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:42.998826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.004627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.004825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.004842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.008860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.009187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.015511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.015894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.015911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.019773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.019975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.019991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.024186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.024377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.024393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 
[2024-12-09 12:03:43.032728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.033050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.033068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.039442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.039648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.039664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.048836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.049167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.049183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.057243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.057459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.057475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.067612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.067930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.067947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.073463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.073669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.073688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.078634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.078973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.078990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.086953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.087250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.087267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.095163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.095353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.095370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.101887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.102078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.102094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.109223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.109532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.109549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.117246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.117562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.117579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.124230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.124559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.128809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.129011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.129026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.138279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.292 [2024-12-09 12:03:43.138631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.292 [2024-12-09 12:03:43.138653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.292 [2024-12-09 12:03:43.144970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.293 [2024-12-09 12:03:43.145280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.293 [2024-12-09 12:03:43.145297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.293 [2024-12-09 12:03:43.152043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.293 [2024-12-09 12:03:43.152385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.293 [2024-12-09 12:03:43.152402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.293 [2024-12-09 12:03:43.158318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.293 [2024-12-09 12:03:43.158367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.293 [2024-12-09 12:03:43.158382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.293 [2024-12-09 12:03:43.167740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.293 [2024-12-09 12:03:43.168078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.293 [2024-12-09 12:03:43.168095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.176410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.176735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.176752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.182172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.182363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.192749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.192970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.192987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.203183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.203508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.203524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.214672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.215000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.215017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.225408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.225622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.225643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.236055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.236287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.236303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.246674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.247032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.247049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.257461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.257820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.257837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.268140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.268345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.268361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.278107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.278423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.278441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.289759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.290094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.290111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.300549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.300852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.300873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.310405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.310618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.310634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.320769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.321025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.331789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.332029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 
12:03:43.332044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.342606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.342865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.342881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.353254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.353476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.353491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.364843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.365112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.365128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.374645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.374882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.374898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.384659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.384984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.385001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.395234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.395652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-12-09 12:03:43.395669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.406164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.554 [2024-12-09 12:03:43.406487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:35.554 [2024-12-09 12:03:43.406503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-12-09 12:03:43.416431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.555 [2024-12-09 12:03:43.416673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-12-09 12:03:43.416690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-12-09 12:03:43.426885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.555 [2024-12-09 12:03:43.427158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-12-09 12:03:43.427175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.815 [2024-12-09 12:03:43.438107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.815 [2024-12-09 12:03:43.438310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.815 [2024-12-09 12:03:43.438326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.815 [2024-12-09 12:03:43.448667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.815 [2024-12-09 12:03:43.448859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.815 [2024-12-09 12:03:43.448876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.815 [2024-12-09 12:03:43.459817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.815 [2024-12-09 12:03:43.460063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.815 [2024-12-09 12:03:43.460079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.815 [2024-12-09 12:03:43.470435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.815 [2024-12-09 12:03:43.470736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.815 [2024-12-09 12:03:43.470753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.815 [2024-12-09 12:03:43.480761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.815 [2024-12-09 12:03:43.480973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.815 [2024-12-09 12:03:43.480990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.491473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.491699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.491716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.502656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.502924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.502941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.513187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.513507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.513524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.524109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.524479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.534906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.535134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.545455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.545677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.545692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.555850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.556072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.556088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.565592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.565836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.565853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.575289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.575514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.575534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.586108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.586404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.586421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.597054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.597307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.597324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.608122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.608449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.608466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.619041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.619357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.619375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.630185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.630497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.630514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.640970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.641178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.641194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.651652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.651893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.651908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.661766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.661969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.661985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.668675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.668970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.668987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.674121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.674441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.674458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.682348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.682538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.682555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.688088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.688291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.688308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.693503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.693700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.693717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.816 [2024-12-09 12:03:43.698382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:35.816 [2024-12-09 12:03:43.698572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.816 [2024-12-09 12:03:43.698588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.704321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.704524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.704541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.710335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.710729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.710747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.719243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.719655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.719672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.726222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.726519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.726536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.735807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 
12:03:43.736135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.736152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.745226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.745479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.745495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.751935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.752125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.752142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.758408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.758733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.758750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.766294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.766611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.766628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.771426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.771619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.771635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.777666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8 00:28:36.077 [2024-12-09 12:03:43.777957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.077 [2024-12-09 12:03:43.777974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.077 [2024-12-09 12:03:43.784444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with 
pdu=0x200016eff3c8
00:28:36.077 [2024-12-09 12:03:43.784761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.077 [2024-12-09 12:03:43.784781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:36.077 [2024-12-09 12:03:43.790931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:36.077 [2024-12-09 12:03:43.791266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.077 [2024-12-09 12:03:43.791283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:36.077 [2024-12-09 12:03:43.798643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:36.077 [2024-12-09 12:03:43.798967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.077 [2024-12-09 12:03:43.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:36.077 [2024-12-09 12:03:43.807499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:36.077 [2024-12-09 12:03:43.807819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.077 [2024-12-09 12:03:43.807836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:36.078 [2024-12-09 12:03:43.816771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:36.078 [2024-12-09 12:03:43.817132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.078 [2024-12-09 12:03:43.817149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:36.078 [2024-12-09 12:03:43.821750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc91f0) with pdu=0x200016eff3c8
00:28:36.078 [2024-12-09 12:03:43.821944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:36.078 [2024-12-09 12:03:43.821960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:36.078 4589.50 IOPS, 573.69 MiB/s
00:28:36.078 Latency(us)
00:28:36.078 [2024-12-09T11:03:43.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.078 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:36.078 nvme0n1 : 2.01 4585.86 573.23 0.00 0.00 3482.65 1426.77 14199.47
00:28:36.078 [2024-12-09T11:03:43.964Z] ===================================================================================================================
00:28:36.078 [2024-12-09T11:03:43.964Z] Total : 4585.86 573.23 0.00 0.00 3482.65 1426.77 14199.47
00:28:36.078 {
00:28:36.078 "results": [
00:28:36.078 {
00:28:36.078 "job": "nvme0n1",
00:28:36.078 "core_mask": "0x2",
00:28:36.078 "workload": "randwrite",
00:28:36.078 "status": "finished",
00:28:36.078 "queue_depth": 16,
00:28:36.078 "io_size": 131072,
00:28:36.078 "runtime": 2.005729,
00:28:36.078 "iops": 4585.863793164481,
00:28:36.078 "mibps": 573.2329741455601,
00:28:36.078 "io_failed": 0,
00:28:36.078 "io_timeout": 0,
00:28:36.078 "avg_latency_us": 3482.6524258896857,
00:28:36.078 "min_latency_us": 1426.7733333333333,
00:28:36.078 "max_latency_us": 14199.466666666667
00:28:36.078 }
00:28:36.078 ],
00:28:36.078 "core_count": 1
00:28:36.078 }
00:28:36.078 12:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:36.078 12:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:36.078 12:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:36.078 | .driver_specific
00:28:36.078 | .nvme_error
00:28:36.078 | .status_code
00:28:36.078 | .command_transient_transport_error'
00:28:36.078 12:03:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 297 > 0 ))
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 221780
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 221780 ']'
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 221780
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221780
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221780'
killing process with pid 221780
12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 221780
Received shutdown signal, test time was about 2.000000 seconds
00:28:36.338
00:28:36.338 Latency(us)
00:28:36.338 [2024-12-09T11:03:44.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.338 [2024-12-09T11:03:44.224Z] ===================================================================================================================
00:28:36.338 [2024-12-09T11:03:44.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 221780
00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 219310
00:28:36.338 12:03:44
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 219310 ']' 00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 219310 00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.338 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219310 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219310' 00:28:36.599 killing process with pid 219310 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 219310 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 219310 00:28:36.599 00:28:36.599 real 0m16.454s 00:28:36.599 user 0m32.766s 00:28:36.599 sys 0m3.459s 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.599 ************************************ 00:28:36.599 END TEST nvmf_digest_error 00:28:36.599 ************************************ 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # sync 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # set +e 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # for i in {1..20} 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:28:36.599 rmmod nvme_tcp 00:28:36.599 rmmod nvme_fabrics 00:28:36.599 rmmod nvme_keyring 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # set -e 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@130 -- # return 0 00:28:36.599 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 219310 ']' 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 219310 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 219310 ']' 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 219310 00:28:36.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (219310) - No such process 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 219310 is not found' 00:28:36.859 Process with pid 219310 is not found 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # iptr 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # remove_spdk_ns 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.859 12:03:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:28:38.771 00:28:38.771 real 0m43.279s 00:28:38.771 user 1m8.039s 00:28:38.771 sys 0m13.020s 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:38.771 ************************************ 00:28:38.771 END TEST nvmf_digest 00:28:38.771 ************************************ 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.771 12:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.033 ************************************ 00:28:39.033 START TEST nvmf_bdevperf 00:28:39.033 ************************************ 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:39.033 * Looking for test storage... 
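The digest_error pass/fail decision traced above comes down to one RPC and one jq filter: bdev_get_iostat against the bperf socket, then a dig into driver_specific.nvme_error. A minimal standalone sketch of that check in bash (the rpc.py path and /var/tmp/bperf.sock socket are taken from the trace; the exact count seen there was 297, simplified here to an "at least one" assertion):

    # Count how many completions came back as COMMAND TRANSIENT TRANSPORT
    # ERROR on a given bdev, mirroring host/digest.sh in the trace above.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Injected data-digest corruption must surface as transient transport
    # errors rather than hard failures, so the assertion is simply:
    (( $(get_transient_errcount nvme0n1) > 0 ))
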
00:28:39.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.033 --rc genhtml_branch_coverage=1 00:28:39.033 --rc genhtml_function_coverage=1 00:28:39.033 --rc genhtml_legend=1 00:28:39.033 --rc geninfo_all_blocks=1 00:28:39.033 --rc geninfo_unexecuted_blocks=1 00:28:39.033 00:28:39.033 ' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.033 --rc genhtml_branch_coverage=1 00:28:39.033 --rc genhtml_function_coverage=1 00:28:39.033 --rc genhtml_legend=1 00:28:39.033 --rc geninfo_all_blocks=1 00:28:39.033 --rc geninfo_unexecuted_blocks=1 00:28:39.033 00:28:39.033 ' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.033 --rc genhtml_branch_coverage=1 00:28:39.033 --rc genhtml_function_coverage=1 00:28:39.033 --rc genhtml_legend=1 00:28:39.033 --rc geninfo_all_blocks=1 00:28:39.033 --rc geninfo_unexecuted_blocks=1 00:28:39.033 00:28:39.033 ' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:39.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.033 --rc genhtml_branch_coverage=1 00:28:39.033 --rc genhtml_function_coverage=1 00:28:39.033 --rc genhtml_legend=1 00:28:39.033 --rc geninfo_all_blocks=1 00:28:39.033 --rc geninfo_unexecuted_blocks=1 00:28:39.033 00:28:39.033 ' 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.033 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # : 0 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:28:39.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@56 -- # have_pci_nics=0 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # xtrace_disable 00:28:39.034 12:03:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_devs=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_devs 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_net_devs=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # pci_drivers=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # local -A pci_drivers 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # net_devs=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga net_devs 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # e810=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga e810 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # x722=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga x722 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # mlx=() 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # local -ga mlx 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:28:47.240 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:47.241 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:47.241 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
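The device-discovery loop traced here keys off PCI vendor:device pairs; 0x8086/0x159b is the Intel E810 ID that matched both ports. A condensed sketch of the same lookup done directly against sysfs (find_e810_netdevs is a hypothetical helper name; the pci_bus_cache arrays above exist to avoid rescanning the bus):

    # List E810 (8086:159b) ports and the kernel netdevs bound to them,
    # mirroring the 'Found 0000:4b:00.x' / 'Found net devices under' output.
    find_e810_netdevs() {
        local pci net
        for pci in /sys/bus/pci/devices/*; do
            [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
            echo "Found ${pci##*/} (0x8086 - 0x159b)"
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
            done
        done
    }
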
00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:47.241 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:47.241 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.241 12:03:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:28:47.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:28:47.241 00:28:47.241 --- 10.0.0.2 ping statistics --- 00:28:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.241 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:28:47.241 00:28:47.241 --- 10.0.0.1 ping statistics --- 00:28:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.241 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=226736 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 226736 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 226736 ']' 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.241 12:03:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.242 [2024-12-09 12:03:54.311680] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
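Stripped of the xtrace noise, the nvmf_tcp_init sequence just traced builds a two-port loopback rig: one physical E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a firewall exception plus two pings validate the path before the target is launched. Condensed, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port, namespaced
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The namespacing is why every target-side command from here on is wrapped in "ip netns exec cvl_0_0_ns_spdk", including the nvmf_tgt launch traced above.
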
00:28:47.242 [2024-12-09 12:03:54.311751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.242 [2024-12-09 12:03:54.411477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.242 [2024-12-09 12:03:54.463459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.242 [2024-12-09 12:03:54.463514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.242 [2024-12-09 12:03:54.463522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.242 [2024-12-09 12:03:54.463529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.242 [2024-12-09 12:03:54.463536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.242 [2024-12-09 12:03:54.465383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.242 [2024-12-09 12:03:54.465549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.242 [2024-12-09 12:03:54.465550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.242 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.242 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:47.242 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:47.242 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.242 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 [2024-12-09 12:03:55.157535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 Malloc0 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
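tgt_init's provisioning is a short sequence of rpc_cmd calls; the transport, malloc bdev, and subsystem have just been created, and the add-namespace and add-listener calls are the ones traced immediately below. rpc_cmd is effectively a wrapper around rpc.py talking to the default /var/tmp/spdk.sock, so the same sequence as plain invocations would look like this (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 are the bdevperf.sh constants seen earlier in the trace; the -o/-u flags are reproduced from the trace as-is):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener call the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", which is exactly the notice that appears below.
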
00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.526 [2024-12-09 12:03:55.228289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:47.526 { 00:28:47.526 "params": { 00:28:47.526 "name": "Nvme$subsystem", 00:28:47.526 "trtype": "$TEST_TRANSPORT", 00:28:47.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.526 "adrfam": "ipv4", 00:28:47.526 "trsvcid": "$NVMF_PORT", 00:28:47.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.526 "hdgst": ${hdgst:-false}, 00:28:47.526 "ddgst": ${ddgst:-false} 00:28:47.526 }, 00:28:47.526 "method": "bdev_nvme_attach_controller" 00:28:47.526 } 00:28:47.526 EOF 00:28:47.526 )") 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:28:47.526 12:03:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:47.526 "params": { 00:28:47.526 "name": "Nvme1", 00:28:47.526 "trtype": "tcp", 00:28:47.526 "traddr": "10.0.0.2", 00:28:47.526 "adrfam": "ipv4", 00:28:47.526 "trsvcid": "4420", 00:28:47.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.526 "hdgst": false, 00:28:47.526 "ddgst": false 00:28:47.526 }, 00:28:47.526 "method": "bdev_nvme_attach_controller" 00:28:47.526 }' 00:28:47.526 [2024-12-09 12:03:55.284261] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
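Worth pausing on the --json /dev/fd/62 argument traced above: the bdev configuration never exists on disk. gen_nvmf_target_json prints the attach-controller JSON shown in the trace and the test hands it to bdevperf through a process substitution, which bash exposes to the child as a /dev/fd path. A sketch of the pattern, under the assumption (consistent with the /dev/fd/62 in the argv) that the script uses <(...):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

    # <(gen_nvmf_target_json) becomes /dev/fd/NN in bdevperf's argv.
    $bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1

Note the generated params block sets "hdgst": ${hdgst:-false} and "ddgst": ${ddgst:-false}, so header and data digests are off by default in this test; presumably a digest-enabled run only has to export hdgst=true and ddgst=true before the config is regenerated.
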
00:28:47.526 [2024-12-09 12:03:55.284319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227027 ] 00:28:47.526 [2024-12-09 12:03:55.373458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.823 [2024-12-09 12:03:55.409497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.083 Running I/O for 1 seconds... 00:28:49.026 8631.00 IOPS, 33.71 MiB/s 00:28:49.026 Latency(us) 00:28:49.026 [2024-12-09T11:03:56.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:49.026 Verification LBA range: start 0x0 length 0x4000 00:28:49.026 Nvme1n1 : 1.00 8721.27 34.07 0.00 0.00 14615.84 1215.15 15182.51 00:28:49.026 [2024-12-09T11:03:56.912Z] =================================================================================================================== 00:28:49.026 [2024-12-09T11:03:56.912Z] Total : 8721.27 34.07 0.00 0.00 14615.84 1215.15 15182.51 00:28:49.026 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=227298 00:28:49.026 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:49.027 { 00:28:49.027 "params": { 00:28:49.027 "name": "Nvme$subsystem", 00:28:49.027 "trtype": "$TEST_TRANSPORT", 00:28:49.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.027 "adrfam": "ipv4", 00:28:49.027 "trsvcid": "$NVMF_PORT", 00:28:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.027 "hdgst": ${hdgst:-false}, 00:28:49.027 "ddgst": ${ddgst:-false} 00:28:49.027 }, 00:28:49.027 "method": "bdev_nvme_attach_controller" 00:28:49.027 } 00:28:49.027 EOF 00:28:49.027 )") 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 
00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:28:49.027 12:03:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:49.027 "params": { 00:28:49.027 "name": "Nvme1", 00:28:49.027 "trtype": "tcp", 00:28:49.027 "traddr": "10.0.0.2", 00:28:49.027 "adrfam": "ipv4", 00:28:49.027 "trsvcid": "4420", 00:28:49.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.027 "hdgst": false, 00:28:49.027 "ddgst": false 00:28:49.027 }, 00:28:49.027 "method": "bdev_nvme_attach_controller" 00:28:49.027 }' 00:28:49.027 [2024-12-09 12:03:56.895624] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:28:49.027 [2024-12-09 12:03:56.895684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227298 ] 00:28:49.288 [2024-12-09 12:03:56.983786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.288 [2024-12-09 12:03:57.018445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.549 Running I/O for 15 seconds... 00:28:51.877 9258.00 IOPS, 36.16 MiB/s [2024-12-09T11:04:00.028Z] 10105.50 IOPS, 39.47 MiB/s [2024-12-09T11:04:00.028Z] 12:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 226736 00:28:52.142 12:03:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:52.142 [2024-12-09 12:03:59.858938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 12:03:59.858980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142 [2024-12-09 12:03:59.858999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 12:03:59.859010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142 [2024-12-09 12:03:59.859022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 12:03:59.859031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142 [2024-12-09 12:03:59.859040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 12:03:59.859048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142 [2024-12-09 12:03:59.859058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 12:03:59.859067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142 [2024-12-09 12:03:59.859079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.142 [2024-12-09 
12:03:59.859088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.142
[... roughly 120 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: every remaining outstanding I/O on qid:1 (WRITE lba:75856 through lba:76648 and READ lba:75632 through lba:75792) completes with ABORTED - SQ DELETION (00/08) ...]
00:28:52.145 [2024-12-09 12:03:59.861278] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eea0 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.861287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:52.145 [2024-12-09 12:03:59.861293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:52.145 [2024-12-09 12:03:59.861300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75800 len:8 PRP1 0x0 PRP2 0x0 00:28:52.145 [2024-12-09 12:03:59.861311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.145 [2024-12-09 12:03:59.864926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.864979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.865893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.145 [2024-12-09 12:03:59.865930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.145 [2024-12-09 12:03:59.865942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.866182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.866405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.145 [2024-12-09 12:03:59.866414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.145 [2024-12-09 12:03:59.866424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.145 [2024-12-09 12:03:59.866432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.145 [2024-12-09 12:03:59.879131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.879740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.145 [2024-12-09 12:03:59.879779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.145 [2024-12-09 12:03:59.879791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.880030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.880252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.145 [2024-12-09 12:03:59.880262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.145 [2024-12-09 12:03:59.880270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:52.145 [2024-12-09 12:03:59.880278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.145 [2024-12-09 12:03:59.892955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.893631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.145 [2024-12-09 12:03:59.893679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.145 [2024-12-09 12:03:59.893690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.893929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.894150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.145 [2024-12-09 12:03:59.894159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.145 [2024-12-09 12:03:59.894167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.145 [2024-12-09 12:03:59.894176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.145 [2024-12-09 12:03:59.906868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.908205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.145 [2024-12-09 12:03:59.908231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.145 [2024-12-09 12:03:59.908240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.908467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.908692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.145 [2024-12-09 12:03:59.908702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.145 [2024-12-09 12:03:59.908709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.145 [2024-12-09 12:03:59.908716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.145 [2024-12-09 12:03:59.920761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.921321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.145 [2024-12-09 12:03:59.921361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.145 [2024-12-09 12:03:59.921377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.145 [2024-12-09 12:03:59.921617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.145 [2024-12-09 12:03:59.921848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.145 [2024-12-09 12:03:59.921858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.145 [2024-12-09 12:03:59.921866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.145 [2024-12-09 12:03:59.921874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.145 [2024-12-09 12:03:59.934549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.145 [2024-12-09 12:03:59.935246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.146 [2024-12-09 12:03:59.935289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.146 [2024-12-09 12:03:59.935302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.146 [2024-12-09 12:03:59.935542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.146 [2024-12-09 12:03:59.935774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.146 [2024-12-09 12:03:59.935785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.146 [2024-12-09 12:03:59.935794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.146 [2024-12-09 12:03:59.935802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.146 [2024-12-09 12:03:59.948499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.146 [2024-12-09 12:03:59.949153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.146 [2024-12-09 12:03:59.949198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.146 [2024-12-09 12:03:59.949210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.146 [2024-12-09 12:03:59.949451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.146 [2024-12-09 12:03:59.949685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.146 [2024-12-09 12:03:59.949695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.146 [2024-12-09 12:03:59.949703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.146 [2024-12-09 12:03:59.949711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.146 [2024-12-09 12:03:59.962386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.146 [2024-12-09 12:03:59.963033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.146 [2024-12-09 12:03:59.963079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.146 [2024-12-09 12:03:59.963091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.146 [2024-12-09 12:03:59.963334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.146 [2024-12-09 12:03:59.963564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.146 [2024-12-09 12:03:59.963573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.146 [2024-12-09 12:03:59.963581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.146 [2024-12-09 12:03:59.963589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.146 [2024-12-09 12:03:59.976287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.146 [2024-12-09 12:03:59.976835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.146 [2024-12-09 12:03:59.976880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.146 [2024-12-09 12:03:59.976893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.146 [2024-12-09 12:03:59.977139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.146 [2024-12-09 12:03:59.977362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.146 [2024-12-09 12:03:59.977372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.146 [2024-12-09 12:03:59.977380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.146 [2024-12-09 12:03:59.977389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.146 [2024-12-09 12:03:59.990113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.146 [2024-12-09 12:03:59.990646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.146 [2024-12-09 12:03:59.990669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.146 [2024-12-09 12:03:59.990678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.146 [2024-12-09 12:03:59.990897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.146 [2024-12-09 12:03:59.991116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.146 [2024-12-09 12:03:59.991125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.146 [2024-12-09 12:03:59.991132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.146 [2024-12-09 12:03:59.991140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.146 [2024-12-09 12:04:00.004482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.146 [2024-12-09 12:04:00.005126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.146 [2024-12-09 12:04:00.005176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.146 [2024-12-09 12:04:00.005189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.146 [2024-12-09 12:04:00.005434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.146 [2024-12-09 12:04:00.005667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.146 [2024-12-09 12:04:00.005678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.146 [2024-12-09 12:04:00.005694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.146 [2024-12-09 12:04:00.005703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.146 [2024-12-09 12:04:00.018283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.146 [2024-12-09 12:04:00.018824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.146 [2024-12-09 12:04:00.018849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.146 [2024-12-09 12:04:00.018859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.146 [2024-12-09 12:04:00.019080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.146 [2024-12-09 12:04:00.019299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.146 [2024-12-09 12:04:00.019309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.146 [2024-12-09 12:04:00.019317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.146 [2024-12-09 12:04:00.019324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.409 [2024-12-09 12:04:00.032242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.409 [2024-12-09 12:04:00.032826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.409 [2024-12-09 12:04:00.032850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.409 [2024-12-09 12:04:00.032858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.409 [2024-12-09 12:04:00.033078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.409 [2024-12-09 12:04:00.033297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.033307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.033315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.033323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.046059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.046604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.046677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.046693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.047023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.047281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.047291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.047299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.047308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.060020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.060614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.060682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.060695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.060946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.061173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.061183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.061192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.061202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.073919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.074683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.074747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.074762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.075017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.075243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.075254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.075263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.075272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.087809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.088205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.088239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.088249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.088472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.088706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.088717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.088725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.088733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.101708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.102394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.102457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.102478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.102747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.102974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.102984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.102993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.103002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.115830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.116490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.116519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.116529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.116762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.116986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.116997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.117006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.117016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.129729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.130313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.130347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.130569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.130802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.130821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.130830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.130838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.143563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.144057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.144085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.144094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.144314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.144545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.144556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.144566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.144575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.157515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.158106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.158131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.158139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.158360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.158579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.158589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.410 [2024-12-09 12:04:00.158597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.410 [2024-12-09 12:04:00.158605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.410 [2024-12-09 12:04:00.171317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.410 [2024-12-09 12:04:00.172041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.410 [2024-12-09 12:04:00.172105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.410 [2024-12-09 12:04:00.172119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.410 [2024-12-09 12:04:00.172374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.410 [2024-12-09 12:04:00.172600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.410 [2024-12-09 12:04:00.172611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.172619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.172629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.185156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.185619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.185664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.185675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.185900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.186120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.186130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.186146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.186154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.199073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.199652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.199678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.199687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.199909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.200128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.200138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.200146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.200154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.212891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.213557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.213620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.213633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.213899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.214125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.214135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.214144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.214154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.226677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.227394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.227458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.227472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.227741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.227968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.227979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.227987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.227996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.240520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.241226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.241290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.241304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.241563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.241805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.241816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.241825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.241834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.254342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.255054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.255118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.255131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.255386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.255612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.255622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.255630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.255651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.268153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.268780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.268843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.268856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.269110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.269336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.269347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.269356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.269365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.411 [2024-12-09 12:04:00.282090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.411 [2024-12-09 12:04:00.282598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.411 [2024-12-09 12:04:00.282628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.411 [2024-12-09 12:04:00.282655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.411 [2024-12-09 12:04:00.282879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.411 [2024-12-09 12:04:00.283101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.411 [2024-12-09 12:04:00.283112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.411 [2024-12-09 12:04:00.283120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.411 [2024-12-09 12:04:00.283128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.674 [2024-12-09 12:04:00.296032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.296656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.296682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.674 [2024-12-09 12:04:00.296691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.674 [2024-12-09 12:04:00.296912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.674 [2024-12-09 12:04:00.297132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.674 [2024-12-09 12:04:00.297141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.674 [2024-12-09 12:04:00.297148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.674 [2024-12-09 12:04:00.297156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.674 [2024-12-09 12:04:00.309868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.310434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.310458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.674 [2024-12-09 12:04:00.310467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.674 [2024-12-09 12:04:00.310692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.674 [2024-12-09 12:04:00.310913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.674 [2024-12-09 12:04:00.310924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.674 [2024-12-09 12:04:00.310931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.674 [2024-12-09 12:04:00.310940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.674 [2024-12-09 12:04:00.323833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.324394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.324416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.674 [2024-12-09 12:04:00.324425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.674 [2024-12-09 12:04:00.324654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.674 [2024-12-09 12:04:00.324881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.674 [2024-12-09 12:04:00.324891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.674 [2024-12-09 12:04:00.324898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.674 [2024-12-09 12:04:00.324906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.674 [2024-12-09 12:04:00.337785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.338455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.338514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.674 [2024-12-09 12:04:00.338527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.674 [2024-12-09 12:04:00.338793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.674 [2024-12-09 12:04:00.339019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.674 [2024-12-09 12:04:00.339030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.674 [2024-12-09 12:04:00.339039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.674 [2024-12-09 12:04:00.339048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.674 8612.67 IOPS, 33.64 MiB/s [2024-12-09T11:04:00.560Z] [2024-12-09 12:04:00.351555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.352261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.352319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.674 [2024-12-09 12:04:00.352334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.674 [2024-12-09 12:04:00.352587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.674 [2024-12-09 12:04:00.352825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.674 [2024-12-09 12:04:00.352836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.674 [2024-12-09 12:04:00.352845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.674 [2024-12-09 12:04:00.352854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
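The one line above that breaks the pattern, "8612.67 IOPS, 33.64 MiB/s", is a periodic throughput sample from the I/O workload that keeps running against the bdev while the reset loop spins. The two figures are mutually consistent with a 4 KiB I/O size, which is presumably what the workload used: 8612.67 IOPS × 4096 B ≈ 35,277,496 B/s, and 35,277,496 / 1,048,576 ≈ 33.64 MiB/s.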
00:28:52.674 [2024-12-09 12:04:00.365402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.674 [2024-12-09 12:04:00.366136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.674 [2024-12-09 12:04:00.366198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.366212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.366466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.366703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.366713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.366728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.366737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.379231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.379785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.379843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.379857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.380111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.380337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.380346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.380355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.380363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.393076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.393579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.393606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.393614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.393844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.394064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.394075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.394082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.394090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.406996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.407607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.407630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.407648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.407868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.408087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.408097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.408105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.408112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.420821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.421388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.421410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.421418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.421643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.421863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.421872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.421879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.421887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.434769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.435223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.435243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.435252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.435470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.435696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.435707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.435714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.435722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.448616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.449169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.449192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.449199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.449418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.449636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.449652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.449659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.449666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.462553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.463185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.463244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.463260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.463507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.463742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.463753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.463761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.463769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.476469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.477170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.477220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.477233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.477478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.477713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.477724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.477732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.477741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.490429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.491059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.491108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.675 [2024-12-09 12:04:00.491120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.675 [2024-12-09 12:04:00.491363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.675 [2024-12-09 12:04:00.491587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.675 [2024-12-09 12:04:00.491596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.675 [2024-12-09 12:04:00.491604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.675 [2024-12-09 12:04:00.491612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.675 [2024-12-09 12:04:00.504341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.675 [2024-12-09 12:04:00.505014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.675 [2024-12-09 12:04:00.505062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.676 [2024-12-09 12:04:00.505074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.676 [2024-12-09 12:04:00.505331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.676 [2024-12-09 12:04:00.505556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.676 [2024-12-09 12:04:00.505566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.676 [2024-12-09 12:04:00.505573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.676 [2024-12-09 12:04:00.505582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.676 [2024-12-09 12:04:00.518305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.676 [2024-12-09 12:04:00.518936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.676 [2024-12-09 12:04:00.518962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.676 [2024-12-09 12:04:00.518972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.676 [2024-12-09 12:04:00.519192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.676 [2024-12-09 12:04:00.519411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.676 [2024-12-09 12:04:00.519420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.676 [2024-12-09 12:04:00.519428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.676 [2024-12-09 12:04:00.519435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.676 [2024-12-09 12:04:00.532134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.676 [2024-12-09 12:04:00.532708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.676 [2024-12-09 12:04:00.532743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.676 [2024-12-09 12:04:00.532752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.676 [2024-12-09 12:04:00.532983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.676 [2024-12-09 12:04:00.533203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.676 [2024-12-09 12:04:00.533213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.676 [2024-12-09 12:04:00.533221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.676 [2024-12-09 12:04:00.533228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.676 [2024-12-09 12:04:00.546109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:52.676 [2024-12-09 12:04:00.546746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.676 [2024-12-09 12:04:00.546798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:52.676 [2024-12-09 12:04:00.546810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:52.676 [2024-12-09 12:04:00.547057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:52.676 [2024-12-09 12:04:00.547281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:52.676 [2024-12-09 12:04:00.547290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:52.676 [2024-12-09 12:04:00.547305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:52.676 [2024-12-09 12:04:00.547314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:52.938 [2024-12-09 12:04:00.560031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.560525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.560551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.560559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.560789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.561011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.561020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.561027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.561035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.938 [2024-12-09 12:04:00.573940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.574498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.574518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.574527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.574751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.574971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.574987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.574995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.575002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.938 [2024-12-09 12:04:00.587906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.588461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.588482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.588490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.588718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.588938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.588948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.588955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.588963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.938 [2024-12-09 12:04:00.601888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.602441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.602462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.602471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.602698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.602919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.602930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.602938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.602946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.938 [2024-12-09 12:04:00.615824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.616277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.616300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.616309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.616527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.616754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.616764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.616772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.616779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.938 [2024-12-09 12:04:00.629684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.630334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.630389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.630401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.630660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.630886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.630897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.630907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.630919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.938 [2024-12-09 12:04:00.643645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.644231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.644265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.644274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.644495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.644725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.644737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.644744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.644752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.938 [2024-12-09 12:04:00.657434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.658052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.658078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.658087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.658307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.658526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.658536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.938 [2024-12-09 12:04:00.658543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.938 [2024-12-09 12:04:00.658551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.938 [2024-12-09 12:04:00.671243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.938 [2024-12-09 12:04:00.671938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.938 [2024-12-09 12:04:00.672000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.938 [2024-12-09 12:04:00.672014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.938 [2024-12-09 12:04:00.672267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.938 [2024-12-09 12:04:00.672493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.938 [2024-12-09 12:04:00.672502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.672511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.672520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.939 [2024-12-09 12:04:00.685029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.685724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.685789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.685803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.686068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.686294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.686305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.686313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.686323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.939 [2024-12-09 12:04:00.698836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.699540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.699603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.699617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.699884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.700110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.700120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.700128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.700137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.939 [2024-12-09 12:04:00.712655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.713264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.713320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.713331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.713514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.713684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.713693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.713699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.713706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.939 [2024-12-09 12:04:00.725273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.726013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.726066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.726075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.726256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.726412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.726420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.726432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.726440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.939 [2024-12-09 12:04:00.738014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.738633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.738688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.738697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.738876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.739031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.739038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.739045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.739052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.939 [2024-12-09 12:04:00.750618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.751226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.751272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.751281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.751457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.751612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.751619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.751624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.751631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.939 [2024-12-09 12:04:00.763327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.763966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.764009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.764018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.764192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.764347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.764354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.764359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.764365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.939 [2024-12-09 12:04:00.776061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.776558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.776577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.776583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.776741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.776893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.776899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.776904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.776909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.939 [2024-12-09 12:04:00.788716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.789282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.789320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.789329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.789500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.789664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.789672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.789678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.939 [2024-12-09 12:04:00.789684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:52.939 [2024-12-09 12:04:00.801410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.939 [2024-12-09 12:04:00.802032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.939 [2024-12-09 12:04:00.802069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.939 [2024-12-09 12:04:00.802077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.939 [2024-12-09 12:04:00.802248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.939 [2024-12-09 12:04:00.802401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.939 [2024-12-09 12:04:00.802408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.939 [2024-12-09 12:04:00.802413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.940 [2024-12-09 12:04:00.802419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:52.940 [2024-12-09 12:04:00.814107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:52.940 [2024-12-09 12:04:00.814594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.940 [2024-12-09 12:04:00.814633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:52.940 [2024-12-09 12:04:00.814651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:52.940 [2024-12-09 12:04:00.814820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:52.940 [2024-12-09 12:04:00.814974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:52.940 [2024-12-09 12:04:00.814981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:52.940 [2024-12-09 12:04:00.814987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:52.940 [2024-12-09 12:04:00.814993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.201 [2024-12-09 12:04:00.826810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.201 [2024-12-09 12:04:00.827400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.201 [2024-12-09 12:04:00.827435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.201 [2024-12-09 12:04:00.827443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.201 [2024-12-09 12:04:00.827612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.201 [2024-12-09 12:04:00.827773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.201 [2024-12-09 12:04:00.827781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.201 [2024-12-09 12:04:00.827786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.201 [2024-12-09 12:04:00.827792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.201 [2024-12-09 12:04:00.839465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.201 [2024-12-09 12:04:00.840059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.201 [2024-12-09 12:04:00.840091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.201 [2024-12-09 12:04:00.840100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.201 [2024-12-09 12:04:00.840267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.201 [2024-12-09 12:04:00.840420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.201 [2024-12-09 12:04:00.840427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.201 [2024-12-09 12:04:00.840432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.201 [2024-12-09 12:04:00.840438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.201 [2024-12-09 12:04:00.852120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.201 [2024-12-09 12:04:00.852705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.201 [2024-12-09 12:04:00.852737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.201 [2024-12-09 12:04:00.852746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.201 [2024-12-09 12:04:00.852918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.201 [2024-12-09 12:04:00.853071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.201 [2024-12-09 12:04:00.853077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.201 [2024-12-09 12:04:00.853083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.201 [2024-12-09 12:04:00.853089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.201 [2024-12-09 12:04:00.864764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.201 [2024-12-09 12:04:00.865364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.201 [2024-12-09 12:04:00.865396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.201 [2024-12-09 12:04:00.865405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.201 [2024-12-09 12:04:00.865571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.201 [2024-12-09 12:04:00.865734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.201 [2024-12-09 12:04:00.865742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.201 [2024-12-09 12:04:00.865747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.201 [2024-12-09 12:04:00.865753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.201 [2024-12-09 12:04:00.877430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.201 [2024-12-09 12:04:00.878050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.201 [2024-12-09 12:04:00.878080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.201 [2024-12-09 12:04:00.878089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.201 [2024-12-09 12:04:00.878257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.201 [2024-12-09 12:04:00.878411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.201 [2024-12-09 12:04:00.878417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.201 [2024-12-09 12:04:00.878422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.201 [2024-12-09 12:04:00.878428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:00.890097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.890684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.890715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.890724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.890892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.891045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.891052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.891061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.891068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:00.902708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.903291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.903300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.903465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.903618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.903624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.903630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.903635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:00.915312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.915932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.915962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.915971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.916137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.916289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.916296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.916302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.916308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:00.927981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.928416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.928446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.928455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.928621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.928780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.928787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.928793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.928799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:00.940616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.941233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.941263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.941272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.941438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.941590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.941597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.941602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.941608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:00.953281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.953754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.953785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.953794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.953962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.954115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.954121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.954126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.954132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:00.965941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.966511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.966542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.966550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.966723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.966876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.966882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.966888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.966893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:00.978553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.979163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.979197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.979205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.979371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.979524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.979530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.979536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.979541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:00.991214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:00.991829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:00.991859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:00.991868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:00.992033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:00.992186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:00.992192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:00.992198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:00.992204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:01.003883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:01.004476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:01.004507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:01.004516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:01.004688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:01.004842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:01.004848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:01.004854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:01.004859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:01.016518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:01.017077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:01.017107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:01.017116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:01.017282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:01.017439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:01.017445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:01.017451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:01.017456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:01.029122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:01.029713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:01.029743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:01.029752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:01.029920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:01.030072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:01.030079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:01.030084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:01.030090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.202 [2024-12-09 12:04:01.041761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:01.042341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:01.042371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:01.042380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:01.042546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:01.042704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.202 [2024-12-09 12:04:01.042711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.202 [2024-12-09 12:04:01.042717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.202 [2024-12-09 12:04:01.042722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.202 [2024-12-09 12:04:01.054404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.202 [2024-12-09 12:04:01.055005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.202 [2024-12-09 12:04:01.055036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.202 [2024-12-09 12:04:01.055045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.202 [2024-12-09 12:04:01.055211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.202 [2024-12-09 12:04:01.055363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.203 [2024-12-09 12:04:01.055369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.203 [2024-12-09 12:04:01.055378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.203 [2024-12-09 12:04:01.055384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.203 [2024-12-09 12:04:01.067060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.203 [2024-12-09 12:04:01.067665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.203 [2024-12-09 12:04:01.067696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.203 [2024-12-09 12:04:01.067705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.203 [2024-12-09 12:04:01.067871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.203 [2024-12-09 12:04:01.068023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.203 [2024-12-09 12:04:01.068030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.203 [2024-12-09 12:04:01.068035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.203 [2024-12-09 12:04:01.068041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.203 [2024-12-09 12:04:01.079720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.203 [2024-12-09 12:04:01.080250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.203 [2024-12-09 12:04:01.080280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.203 [2024-12-09 12:04:01.080289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.203 [2024-12-09 12:04:01.080454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.203 [2024-12-09 12:04:01.080607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.203 [2024-12-09 12:04:01.080614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.203 [2024-12-09 12:04:01.080619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.203 [2024-12-09 12:04:01.080625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.092439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.092994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.093025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.093034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.093200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.093353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.093359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.093365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.093370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.105056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.105626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.105662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.105671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.105838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.105990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.105997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.106003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.106008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.117680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.118258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.118288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.118297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.118463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.118616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.118622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.118627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.118633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.130306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.130895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.130926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.130935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.131101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.131254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.131260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.131265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.131271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.142953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.143439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.143455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.143467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.143618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.143774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.143780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.143785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.143790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.155593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.156169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.156199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.156208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.156374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.156527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.156533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.156539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.156544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.168209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.168738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.168768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.168777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.168945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.169097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.169104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.169109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.169115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.180974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.181528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.181558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.181567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.181739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.181896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.181903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.181908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.181914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.193581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.194155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.194185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.194194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.194360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.194513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.194519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.194524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.194530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
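The "(9): Bad file descriptor" in the flush errors is errno 9 (EBADF): the qpair's socket was never established after the refused connect, so the subsequent flush operates on an invalid descriptor. A tiny sketch producing the same errno — an illustration of EBADF only, not the SPDK flush path:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fd = -1;                      /* no socket was ever created */
    if (write(fd, "x", 1) < 0)        /* any I/O on it fails with EBADF */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    return 0;
}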
00:28:53.464 [2024-12-09 12:04:01.206203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.206695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.206734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.206903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.207055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.207062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.207067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.207073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.218888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.219464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.219493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.219502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.219674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.219828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.219835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.219844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.219849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.231509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.232065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.232095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.232104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.232270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.232422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.232429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.232434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.232440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.464 [2024-12-09 12:04:01.244112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.244612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.244627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.244633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.244790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.244947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.244953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.244958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.464 [2024-12-09 12:04:01.244963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.464 [2024-12-09 12:04:01.256759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.464 [2024-12-09 12:04:01.257245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.464 [2024-12-09 12:04:01.257258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.464 [2024-12-09 12:04:01.257263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.464 [2024-12-09 12:04:01.257413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.464 [2024-12-09 12:04:01.257562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.464 [2024-12-09 12:04:01.257568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.464 [2024-12-09 12:04:01.257573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.257578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.465 [2024-12-09 12:04:01.269376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.269924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.269955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.269964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.270131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.270283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.270290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.270296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.270301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.465 [2024-12-09 12:04:01.281968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.282546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.282576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.282585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.282757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.282911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.282917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.282922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.282928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.465 [2024-12-09 12:04:01.294589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.295184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.295214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.295223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.295389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.295541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.295547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.295552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.295558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.465 [2024-12-09 12:04:01.307234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.307749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.307780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.307792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.307960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.308113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.308120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.308125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.308131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.465 [2024-12-09 12:04:01.319940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.320496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.320526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.320535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.320708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.320861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.320868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.320873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.320879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.465 [2024-12-09 12:04:01.332535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.333110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.333140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.333149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.333314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.333467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.333473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.333479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.333484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.465 6459.50 IOPS, 25.23 MiB/s [2024-12-09T11:04:01.351Z] [2024-12-09 12:04:01.345163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.465 [2024-12-09 12:04:01.345660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.465 [2024-12-09 12:04:01.345675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.465 [2024-12-09 12:04:01.345681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.465 [2024-12-09 12:04:01.345834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.465 [2024-12-09 12:04:01.345985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.465 [2024-12-09 12:04:01.345991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.465 [2024-12-09 12:04:01.345996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.465 [2024-12-09 12:04:01.346000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
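Two things stand out in this stretch. The interleaved "6459.50 IOPS, 25.23 MiB/s" entry appears to be the perf tool's periodic throughput marker (its [2024-12-09T11:04:01.351Z] stamp is UTC, one hour behind the log's local 12:04 timestamps). And the reset notices recur roughly every 12–13 ms (e.g. 12:04:01.054404 to 12:04:01.067060 at the top of this run), so the test is driving a tight reconnect loop rather than backing off — an observation from the timestamps, not a stated test parameter. The arithmetic:

#include <stdio.h>

int main(void)
{
    /* Microsecond fields of two consecutive "resetting controller"
     * notices taken from the log: 12:04:01.054404 and 12:04:01.067060. */
    const double first_us = 54404.0, second_us = 67060.0;
    printf("reset-to-reset interval: %.3f ms\n", (second_us - first_us) / 1e3);
    return 0;   /* prints 12.656 ms */
}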
00:28:53.727 [2024-12-09 12:04:01.357815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.358380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.358411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.358420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.358753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.358909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.358915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.358920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.727 [2024-12-09 12:04:01.358926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.727 [2024-12-09 12:04:01.370447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.371033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.371063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.371072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.371238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.371391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.371397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.371402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.727 [2024-12-09 12:04:01.371408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.727 [2024-12-09 12:04:01.383076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.383722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.383753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.383762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.383928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.384081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.384088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.384097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.727 [2024-12-09 12:04:01.384102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.727 [2024-12-09 12:04:01.395785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.396251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.396266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.396272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.396421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.396571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.396577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.396582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.727 [2024-12-09 12:04:01.396586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.727 [2024-12-09 12:04:01.408389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.409028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.409058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.409067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.409232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.409385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.409392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.409397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.727 [2024-12-09 12:04:01.409403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.727 [2024-12-09 12:04:01.421075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.727 [2024-12-09 12:04:01.421608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.727 [2024-12-09 12:04:01.421645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.727 [2024-12-09 12:04:01.421655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.727 [2024-12-09 12:04:01.421823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.727 [2024-12-09 12:04:01.421975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.727 [2024-12-09 12:04:01.421982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.727 [2024-12-09 12:04:01.421987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.421992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.728 [2024-12-09 12:04:01.433799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.434375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.434405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.434414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.434580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.434740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.434748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.434753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.434759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.728 [2024-12-09 12:04:01.446433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.446987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.447018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.447026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.447192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.447345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.447351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.447357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.447362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.728 [2024-12-09 12:04:01.459033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.459527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.459542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.459547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.459704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.459855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.459860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.459865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.459870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.728 [2024-12-09 12:04:01.471682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.472250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.472285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.472293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.472459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.472612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.472618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.472623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.472629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.728 [2024-12-09 12:04:01.484317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.484933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.484964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.484973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.485139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.485292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.485298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.485304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.485309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.728 [2024-12-09 12:04:01.496984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.497555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.497585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.497594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.497768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.497921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.497927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.497933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.497938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.728 [2024-12-09 12:04:01.509618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.510084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.510100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.510106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.510259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.510409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.510415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.510420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.510425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.728 [2024-12-09 12:04:01.522237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.522812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.522842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.522851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.523016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.523169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.523175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.523181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.523186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.728 [2024-12-09 12:04:01.534848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.535435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.535466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.535475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.535648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.535802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.535808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.728 [2024-12-09 12:04:01.535814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.728 [2024-12-09 12:04:01.535819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.728 [2024-12-09 12:04:01.547493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.728 [2024-12-09 12:04:01.548092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.728 [2024-12-09 12:04:01.548122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.728 [2024-12-09 12:04:01.548131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.728 [2024-12-09 12:04:01.548296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.728 [2024-12-09 12:04:01.548449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.728 [2024-12-09 12:04:01.548455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.729 [2024-12-09 12:04:01.548464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.729 [2024-12-09 12:04:01.548470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.729 [2024-12-09 12:04:01.560152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.729 [2024-12-09 12:04:01.560660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.729 [2024-12-09 12:04:01.560678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.729 [2024-12-09 12:04:01.560684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.729 [2024-12-09 12:04:01.560835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.729 [2024-12-09 12:04:01.560986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.729 [2024-12-09 12:04:01.560991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.729 [2024-12-09 12:04:01.560997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.729 [2024-12-09 12:04:01.561002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.729 [2024-12-09 12:04:01.572765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.729 [2024-12-09 12:04:01.573321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.729 [2024-12-09 12:04:01.573351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.729 [2024-12-09 12:04:01.573360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.729 [2024-12-09 12:04:01.573526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.729 [2024-12-09 12:04:01.573686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.729 [2024-12-09 12:04:01.573694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.729 [2024-12-09 12:04:01.573699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.729 [2024-12-09 12:04:01.573705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.729 [2024-12-09 12:04:01.585371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.729 [2024-12-09 12:04:01.585948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.729 [2024-12-09 12:04:01.585978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.729 [2024-12-09 12:04:01.585987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.729 [2024-12-09 12:04:01.586152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.729 [2024-12-09 12:04:01.586305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.729 [2024-12-09 12:04:01.586311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.729 [2024-12-09 12:04:01.586317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.729 [2024-12-09 12:04:01.586322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.729 [2024-12-09 12:04:01.597993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.729 [2024-12-09 12:04:01.598602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.729 [2024-12-09 12:04:01.598632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.729 [2024-12-09 12:04:01.598647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.729 [2024-12-09 12:04:01.598814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.729 [2024-12-09 12:04:01.598966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.729 [2024-12-09 12:04:01.598973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.729 [2024-12-09 12:04:01.598978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.729 [2024-12-09 12:04:01.598985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.991 [2024-12-09 12:04:01.610674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.991 [2024-12-09 12:04:01.611252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.991 [2024-12-09 12:04:01.611283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.991 [2024-12-09 12:04:01.611292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.991 [2024-12-09 12:04:01.611457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.991 [2024-12-09 12:04:01.611611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.991 [2024-12-09 12:04:01.611617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.991 [2024-12-09 12:04:01.611623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.991 [2024-12-09 12:04:01.611631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:53.991 [2024-12-09 12:04:01.623314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:53.991 [2024-12-09 12:04:01.623783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.991 [2024-12-09 12:04:01.623799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:53.991 [2024-12-09 12:04:01.623804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:53.991 [2024-12-09 12:04:01.623955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:53.991 [2024-12-09 12:04:01.624104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:53.991 [2024-12-09 12:04:01.624110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:53.991 [2024-12-09 12:04:01.624115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:53.991 [2024-12-09 12:04:01.624120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:53.991 [2024-12-09 12:04:01.635941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.991 [2024-12-09 12:04:01.636490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.991 [2024-12-09 12:04:01.636524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.991 [2024-12-09 12:04:01.636533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.991 [2024-12-09 12:04:01.636705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.991 [2024-12-09 12:04:01.636858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.991 [2024-12-09 12:04:01.636865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.991 [2024-12-09 12:04:01.636872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.991 [2024-12-09 12:04:01.636878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.991 [2024-12-09 12:04:01.648570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.991 [2024-12-09 12:04:01.649032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.991 [2024-12-09 12:04:01.649047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.991 [2024-12-09 12:04:01.649053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.991 [2024-12-09 12:04:01.649204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.991 [2024-12-09 12:04:01.649354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.991 [2024-12-09 12:04:01.649360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.991 [2024-12-09 12:04:01.649365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.991 [2024-12-09 12:04:01.649370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.991 [2024-12-09 12:04:01.661192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.991 [2024-12-09 12:04:01.661696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.991 [2024-12-09 12:04:01.661710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.991 [2024-12-09 12:04:01.661715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.991 [2024-12-09 12:04:01.661865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.991 [2024-12-09 12:04:01.662014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.991 [2024-12-09 12:04:01.662020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.991 [2024-12-09 12:04:01.662025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.991 [2024-12-09 12:04:01.662030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.991 [2024-12-09 12:04:01.673845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.991 [2024-12-09 12:04:01.674323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.991 [2024-12-09 12:04:01.674336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.674342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.674494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.674648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.674655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.674660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.674664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.686477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.686940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.686953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.686959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.687108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.687257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.687263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.687268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.687272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.699090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.699576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.699589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.699594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.699749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.699899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.699905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.699910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.699917] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.711749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.712242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.712273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.712282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.712448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.712600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.712607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.712616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.712622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.724446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.724849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.724865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.724871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.725021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.725171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.725177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.725181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.725186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.737046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.737417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.737429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.737435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.737585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.737741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.737747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.737752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.737757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.749729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.750283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.750314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.750323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.750488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.750648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.750656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.750661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.750667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.762351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.762846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.762862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.762868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.763019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.763168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.763174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.763179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.763184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.775007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.775491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.775504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.775509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.775664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.775815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.775820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.775825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.775830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.787648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.788135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.788148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.788154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.992 [2024-12-09 12:04:01.788303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.992 [2024-12-09 12:04:01.788453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.992 [2024-12-09 12:04:01.788459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.992 [2024-12-09 12:04:01.788464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.992 [2024-12-09 12:04:01.788468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.992 [2024-12-09 12:04:01.800287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.992 [2024-12-09 12:04:01.800660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.992 [2024-12-09 12:04:01.800677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.992 [2024-12-09 12:04:01.800682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.800832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.800981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.800987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.800992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.800996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.993 [2024-12-09 12:04:01.812962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.993 [2024-12-09 12:04:01.813451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.993 [2024-12-09 12:04:01.813463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.993 [2024-12-09 12:04:01.813468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.813618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.813771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.813778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.813783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.813787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.993 [2024-12-09 12:04:01.825600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.993 [2024-12-09 12:04:01.826097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.993 [2024-12-09 12:04:01.826110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.993 [2024-12-09 12:04:01.826115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.826264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.826413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.826419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.826424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.826429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.993 [2024-12-09 12:04:01.838246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.993 [2024-12-09 12:04:01.838727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.993 [2024-12-09 12:04:01.838740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.993 [2024-12-09 12:04:01.838745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.838897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.839046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.839052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.839057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.839062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.993 [2024-12-09 12:04:01.850891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.993 [2024-12-09 12:04:01.851461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.993 [2024-12-09 12:04:01.851492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.993 [2024-12-09 12:04:01.851501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.851674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.851828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.851835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.851840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.851845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:53.993 [2024-12-09 12:04:01.863523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:53.993 [2024-12-09 12:04:01.863992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:53.993 [2024-12-09 12:04:01.864008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:53.993 [2024-12-09 12:04:01.864014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:53.993 [2024-12-09 12:04:01.864164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:53.993 [2024-12-09 12:04:01.864314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:53.993 [2024-12-09 12:04:01.864320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:53.993 [2024-12-09 12:04:01.864325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:53.993 [2024-12-09 12:04:01.864330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.876161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.876622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.876636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.876646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.876797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.876947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.876953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.876961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.876966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.888792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.889266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.889279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.889284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.889433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.889583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.889589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.889595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.889600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.901420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.901893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.901906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.901912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.902062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.902212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.902217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.902223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.902228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.914053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.914540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.914552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.914557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.914712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.914863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.914869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.914874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.914879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.926788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.927358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.927388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.927397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.927562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.927724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.927731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.927737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.927742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.939428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.939890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.939906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.939912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.940062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.940212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.940218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.940223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.940227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.952084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.952574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.952587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.952593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.952747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.952897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.952903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.952908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.952912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.964738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.965287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.965303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.965309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.965459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.965608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.965614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.965618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.965623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.977448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.977875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.977888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.977893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.978043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.978193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.256 [2024-12-09 12:04:01.978199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.256 [2024-12-09 12:04:01.978203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.256 [2024-12-09 12:04:01.978208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.256 [2024-12-09 12:04:01.990169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.256 [2024-12-09 12:04:01.990654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.256 [2024-12-09 12:04:01.990668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.256 [2024-12-09 12:04:01.990673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.256 [2024-12-09 12:04:01.990823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.256 [2024-12-09 12:04:01.990972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:01.990978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:01.990983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:01.990987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.002808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.003300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.003312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.003317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.003470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.003619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.003625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.003629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.003634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.015462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.015947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.015960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.015966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.016115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.016264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.016270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.016275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.016280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.028102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.028588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.028600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.028605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.028759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.028908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.028914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.028919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.028924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.040746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.041327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.041360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.041368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.041535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.041697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.041705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.041714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.041720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.053407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.053882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.053899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.053905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.054055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.054205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.054211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.054216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.054221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.066038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.066489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.066502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.066507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.066660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.066810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.066816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.066821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.066826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.078646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.079126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.079156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.079165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.079331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.079484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.079490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.079495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.079501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.091335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.091791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.091808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.091813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.091964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.092114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.092119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.092124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.092129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.103955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.257 [2024-12-09 12:04:02.104402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.257 [2024-12-09 12:04:02.104415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.257 [2024-12-09 12:04:02.104420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.257 [2024-12-09 12:04:02.104570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.257 [2024-12-09 12:04:02.104725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.257 [2024-12-09 12:04:02.104731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.257 [2024-12-09 12:04:02.104736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.257 [2024-12-09 12:04:02.104740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.257 [2024-12-09 12:04:02.116575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.258 [2024-12-09 12:04:02.117068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.258 [2024-12-09 12:04:02.117081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.258 [2024-12-09 12:04:02.117087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.258 [2024-12-09 12:04:02.117236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.258 [2024-12-09 12:04:02.117386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.258 [2024-12-09 12:04:02.117391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.258 [2024-12-09 12:04:02.117396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.258 [2024-12-09 12:04:02.117401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.258 [2024-12-09 12:04:02.129221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.258 [2024-12-09 12:04:02.129664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.258 [2024-12-09 12:04:02.129680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.258 [2024-12-09 12:04:02.129686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.258 [2024-12-09 12:04:02.129836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.258 [2024-12-09 12:04:02.129985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.258 [2024-12-09 12:04:02.129991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.258 [2024-12-09 12:04:02.129996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.258 [2024-12-09 12:04:02.130000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.520 [2024-12-09 12:04:02.141843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.520 [2024-12-09 12:04:02.142187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.520 [2024-12-09 12:04:02.142200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.520 [2024-12-09 12:04:02.142206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.520 [2024-12-09 12:04:02.142356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.520 [2024-12-09 12:04:02.142505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.520 [2024-12-09 12:04:02.142511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.520 [2024-12-09 12:04:02.142516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.520 [2024-12-09 12:04:02.142522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.520 [2024-12-09 12:04:02.154492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.520 [2024-12-09 12:04:02.154969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.520 [2024-12-09 12:04:02.154982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.520 [2024-12-09 12:04:02.154988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.520 [2024-12-09 12:04:02.155138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.520 [2024-12-09 12:04:02.155288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.520 [2024-12-09 12:04:02.155294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.520 [2024-12-09 12:04:02.155299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.520 [2024-12-09 12:04:02.155304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.520 [2024-12-09 12:04:02.167122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.520 [2024-12-09 12:04:02.167588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.520 [2024-12-09 12:04:02.167600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.520 [2024-12-09 12:04:02.167605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.520 [2024-12-09 12:04:02.167763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.520 [2024-12-09 12:04:02.167913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.520 [2024-12-09 12:04:02.167919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.520 [2024-12-09 12:04:02.167924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.520 [2024-12-09 12:04:02.167928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.520 [2024-12-09 12:04:02.179748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.520 [2024-12-09 12:04:02.180218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.520 [2024-12-09 12:04:02.180231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.520 [2024-12-09 12:04:02.180236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.520 [2024-12-09 12:04:02.180385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.520 [2024-12-09 12:04:02.180535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.520 [2024-12-09 12:04:02.180540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.520 [2024-12-09 12:04:02.180545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.520 [2024-12-09 12:04:02.180550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.520 [2024-12-09 12:04:02.192365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.520 [2024-12-09 12:04:02.192851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.192864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.192869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.193018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.193168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.193174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.193179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.193183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.205048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.205538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.205551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.205557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.205710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.205861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.205867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.205875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.205879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.217705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.218199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.218211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.218216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.218366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.218517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.218522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.218527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.218532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.230353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.230914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.230945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.230954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.231119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.231272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.231279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.231284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.231290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.242983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.243465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.243480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.243486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.243640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.243792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.243798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.243803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.243808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.255641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.256001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.256015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.256020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.256170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.256319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.256325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.256330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.256335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.268325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.268785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.268799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.268804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.268954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.269103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.269109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.269113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.269118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.280934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.281304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.281318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.281324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.281473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.281623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.281629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.281634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.281643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.293603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.294093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.294109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.294115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.294264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.294414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.294419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.294424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.294429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.306252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.306850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.521 [2024-12-09 12:04:02.306880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.521 [2024-12-09 12:04:02.306889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.521 [2024-12-09 12:04:02.307054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.521 [2024-12-09 12:04:02.307207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.521 [2024-12-09 12:04:02.307213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.521 [2024-12-09 12:04:02.307219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.521 [2024-12-09 12:04:02.307224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.521 [2024-12-09 12:04:02.318919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.521 [2024-12-09 12:04:02.319424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.319438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.319444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.319594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.319749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.319755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.319760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.319765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 [2024-12-09 12:04:02.331582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.332068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.332081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.332087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.332236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.332390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.332395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.332400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.332405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 5167.60 IOPS, 20.19 MiB/s [2024-12-09T11:04:02.408Z] [2024-12-09 12:04:02.345369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.345865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.345879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.345884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.346034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.346185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.346190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.346195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.346200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 [2024-12-09 12:04:02.358023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.358595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.358610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.358616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.358771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.358921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.358927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.358932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.358936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 [2024-12-09 12:04:02.370607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.371145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.371176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.371185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.371350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.371503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.371516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.371521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.371528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 [2024-12-09 12:04:02.383210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.383781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.383811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.383821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.383987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.384139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.384146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.384151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.384157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.522 [2024-12-09 12:04:02.395833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.522 [2024-12-09 12:04:02.396441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.522 [2024-12-09 12:04:02.396472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.522 [2024-12-09 12:04:02.396481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.522 [2024-12-09 12:04:02.396655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.522 [2024-12-09 12:04:02.396809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.522 [2024-12-09 12:04:02.396816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.522 [2024-12-09 12:04:02.396823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.522 [2024-12-09 12:04:02.396829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.784 [2024-12-09 12:04:02.408516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.784 [2024-12-09 12:04:02.409121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.784 [2024-12-09 12:04:02.409151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.784 [2024-12-09 12:04:02.409160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.784 [2024-12-09 12:04:02.409326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.784 [2024-12-09 12:04:02.409479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.784 [2024-12-09 12:04:02.409486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.784 [2024-12-09 12:04:02.409491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.784 [2024-12-09 12:04:02.409496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.784 [2024-12-09 12:04:02.421182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.784 [2024-12-09 12:04:02.421673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.784 [2024-12-09 12:04:02.421689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.784 [2024-12-09 12:04:02.421695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.784 [2024-12-09 12:04:02.421845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.784 [2024-12-09 12:04:02.421995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.784 [2024-12-09 12:04:02.422000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.784 [2024-12-09 12:04:02.422005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.784 [2024-12-09 12:04:02.422010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.784 [2024-12-09 12:04:02.433812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.784 [2024-12-09 12:04:02.434349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.784 [2024-12-09 12:04:02.434379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.784 [2024-12-09 12:04:02.434388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.784 [2024-12-09 12:04:02.434553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.434713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.434720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.434725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.434731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.446534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.447084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.447115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.447124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.447289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.447442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.447448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.447453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.447458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.459134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.459631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.459653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.459659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.459810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.459960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.459966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.459971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.459975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.471775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.472345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.472375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.472384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.472550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.472709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.472717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.472722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.472727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.484407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.484822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.484838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.484843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.484993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.485143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.485150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.485156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.485161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.497117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.497605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.497619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.497624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.497782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.497932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.497938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.497943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.497947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.509787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.510265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.510278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.510284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.510435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.510584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.510590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.510595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.510600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.522419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.522921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.522953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.522962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.523127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.523280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.523286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.523292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.523297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.535127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.535606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.535621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.535627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.535781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.535931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.535937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.535946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.535951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.547769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.548267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.548280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.548285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.548435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.548584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.785 [2024-12-09 12:04:02.548590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.785 [2024-12-09 12:04:02.548595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.785 [2024-12-09 12:04:02.548599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.785 [2024-12-09 12:04:02.560416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.785 [2024-12-09 12:04:02.560914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.785 [2024-12-09 12:04:02.560945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.785 [2024-12-09 12:04:02.560954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.785 [2024-12-09 12:04:02.561122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.785 [2024-12-09 12:04:02.561275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.561281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.561287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.561292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.573125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.573683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.573714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.573723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.573888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.574041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.574048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.574053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.574058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.585734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.586311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.586342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.586351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.586516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.586676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.586683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.586688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.586694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.598358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.598939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.598969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.598978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.599144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.599296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.599303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.599308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.599314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.610991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.611440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.611470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.611479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.611654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.611807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.611814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.611819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.611825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.623640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.624213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.624247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.624255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.624421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.624574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.624580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.624585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.624591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.636264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.636758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.636788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.636797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.636965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.637118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.637124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.637129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.637135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.648953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.649543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.649573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.649583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.649767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.649922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.649928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.649934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.649941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:54.786 [2024-12-09 12:04:02.661602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:54.786 [2024-12-09 12:04:02.662147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.786 [2024-12-09 12:04:02.662178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:54.786 [2024-12-09 12:04:02.662187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:54.786 [2024-12-09 12:04:02.662356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:54.786 [2024-12-09 12:04:02.662510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:54.786 [2024-12-09 12:04:02.662516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:54.786 [2024-12-09 12:04:02.662522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:54.786 [2024-12-09 12:04:02.662528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.049 [2024-12-09 12:04:02.674200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.049 [2024-12-09 12:04:02.674660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.049 [2024-12-09 12:04:02.674677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.049 [2024-12-09 12:04:02.674683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.049 [2024-12-09 12:04:02.674835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.049 [2024-12-09 12:04:02.674986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.049 [2024-12-09 12:04:02.674992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.049 [2024-12-09 12:04:02.674997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.049 [2024-12-09 12:04:02.675002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.049 [2024-12-09 12:04:02.686803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.049 [2024-12-09 12:04:02.687385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.049 [2024-12-09 12:04:02.687415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.049 [2024-12-09 12:04:02.687424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.049 [2024-12-09 12:04:02.687590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.049 [2024-12-09 12:04:02.687750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.049 [2024-12-09 12:04:02.687757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.049 [2024-12-09 12:04:02.687762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.049 [2024-12-09 12:04:02.687768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.049 [2024-12-09 12:04:02.699425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.049 [2024-12-09 12:04:02.699910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.049 [2024-12-09 12:04:02.699941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.049 [2024-12-09 12:04:02.699950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.049 [2024-12-09 12:04:02.700115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.049 [2024-12-09 12:04:02.700268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.049 [2024-12-09 12:04:02.700274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.049 [2024-12-09 12:04:02.700283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.049 [2024-12-09 12:04:02.700288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.049 [2024-12-09 12:04:02.712117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.049 [2024-12-09 12:04:02.712714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.049 [2024-12-09 12:04:02.712745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.049 [2024-12-09 12:04:02.712753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.049 [2024-12-09 12:04:02.712921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.049 [2024-12-09 12:04:02.713074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.049 [2024-12-09 12:04:02.713081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.049 [2024-12-09 12:04:02.713086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.049 [2024-12-09 12:04:02.713092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.049 [2024-12-09 12:04:02.724761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.049 [2024-12-09 12:04:02.725331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.049 [2024-12-09 12:04:02.725361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.725370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.725535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.725695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.725702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.725708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.725714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.737369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.737946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.737976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.737985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.738151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.738303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.738309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.738315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.738320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.749999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.750483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.750498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.750503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.750658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.750809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.750815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.750820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.750825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.762624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.763083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.763096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.763101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.763251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.763400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.763406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.763411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.763415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.775208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.775738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.775769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.775777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.775945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.776098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.776104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.776110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.776115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.787820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.788391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.788425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.788433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.788599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.788759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.788766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.788772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.788777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.800446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.801008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.801038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.801048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.801213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.801365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.801372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.801377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.801383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.813059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.813553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.813568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.813573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.813728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.813878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.813884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.813889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.813894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.825691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.826224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.826254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.826263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.826432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.826585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.826591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.826597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.826602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.838415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.838920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.838950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.838960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.839125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.050 [2024-12-09 12:04:02.839278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.050 [2024-12-09 12:04:02.839284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.050 [2024-12-09 12:04:02.839290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.050 [2024-12-09 12:04:02.839296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.050 [2024-12-09 12:04:02.851109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.050 [2024-12-09 12:04:02.851682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.050 [2024-12-09 12:04:02.851712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.050 [2024-12-09 12:04:02.851721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.050 [2024-12-09 12:04:02.851886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.852039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.852045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.852050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.852056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 226736 Killed "${NVMF_APP[@]}" "$@"
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:55.051 [2024-12-09 12:04:02.863727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.864280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.864311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.864323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.864489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.864648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.864655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.864660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.864666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=228455
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 228455
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 228455 ']'
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:55.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:55.051 12:04:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:55.051 [2024-12-09 12:04:02.876334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.876876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.876906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.876915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.877081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.877234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.877242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.877248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.877254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 [2024-12-09 12:04:02.889066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.889549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.889565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.889570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.889725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.889875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.889885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.889891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.889896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 [2024-12-09 12:04:02.901708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.902306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.902337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.902346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.902512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.902672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.902679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.902686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.902691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 [2024-12-09 12:04:02.914370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.914975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.915005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.915015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.915183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.915336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.915342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.915347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.915353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.051 [2024-12-09 12:04:02.921296] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:28:55.051 [2024-12-09 12:04:02.921351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:55.051 [2024-12-09 12:04:02.927025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.051 [2024-12-09 12:04:02.927530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.051 [2024-12-09 12:04:02.927545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.051 [2024-12-09 12:04:02.927551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.051 [2024-12-09 12:04:02.927708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.051 [2024-12-09 12:04:02.927863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.051 [2024-12-09 12:04:02.927868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.051 [2024-12-09 12:04:02.927873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.051 [2024-12-09 12:04:02.927878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.313 [2024-12-09 12:04:02.939689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.313 [2024-12-09 12:04:02.940230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:02.940260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:02.940270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:02.940436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:02.940589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:02.940596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:02.940602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:02.940608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:02.952289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:02.952914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:02.952945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:02.952954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:02.953119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:02.953272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:02.953278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:02.953284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:02.953289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:02.964933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:02.965500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:02.965530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:02.965539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:02.965711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:02.965865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:02.965871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:02.965885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:02.965891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:02.977568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:02.978170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:02.978200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:02.978210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:02.978378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:02.978531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:02.978537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:02.978542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:02.978548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:02.990228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:02.990856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:02.990886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:02.990896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:02.991062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:02.991215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:02.991223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:02.991228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:02.991234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:03.002909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:03.003485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:03.003515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:03.003525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:03.003697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:03.003851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:03.003859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:03.003864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:03.003870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:03.012835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:55.314 [2024-12-09 12:04:03.015542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:03.016125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:03.016156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:03.016165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:03.016330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:03.016484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:03.016491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:03.016498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:03.016503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:03.028186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:03.028848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:03.028879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:03.028888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:03.029054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:03.029207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:03.029213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:03.029219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:03.029225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:03.040906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:03.041506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:03.041510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:55.314 [2024-12-09 12:04:03.041533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:55.314 [2024-12-09 12:04:03.041537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.314 [2024-12-09 12:04:03.041540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:55.314 [2024-12-09 12:04:03.041546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:55.314 [2024-12-09 12:04:03.041547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.314 [2024-12-09 12:04:03.041551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:55.314 [2024-12-09 12:04:03.041719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.314 [2024-12-09 12:04:03.041873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.314 [2024-12-09 12:04:03.041879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.314 [2024-12-09 12:04:03.041890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.314 [2024-12-09 12:04:03.041896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.314 [2024-12-09 12:04:03.042635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:55.314 [2024-12-09 12:04:03.042802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:55.314 [2024-12-09 12:04:03.042893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:55.314 [2024-12-09 12:04:03.053587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.314 [2024-12-09 12:04:03.054150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.314 [2024-12-09 12:04:03.054182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.054192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.054358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.054511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.054518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.054524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.054530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.066199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.066743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.066775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.066785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.066954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.067107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.067114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.067119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.067125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.078796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.079372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.079404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.079413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.079580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.079738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.079745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.079756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.079762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.091434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.091992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.092022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.092032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.092198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.092352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.092358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.092363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.092369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.104048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.104608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.104645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.104654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.104820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.104973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.104979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.104985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.104991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.116673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.117259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.117289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.117298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.117464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.117617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.117624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.117629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.117635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.129310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.129953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.129985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.129994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.130160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.130313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.130320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.130325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.130330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.141997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.142498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.142514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.142519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.142674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.142825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.142831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.142836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.142841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.154655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.155177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.155208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.155217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.155382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.155535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.155543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.155549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.155556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.167419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.167839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.167869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.315 [2024-12-09 12:04:03.167883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.315 [2024-12-09 12:04:03.168049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.315 [2024-12-09 12:04:03.168202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.315 [2024-12-09 12:04:03.168208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.315 [2024-12-09 12:04:03.168214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.315 [2024-12-09 12:04:03.168219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.315 [2024-12-09 12:04:03.180031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:55.315 [2024-12-09 12:04:03.180406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.315 [2024-12-09 12:04:03.180421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420
00:28:55.316 [2024-12-09 12:04:03.180427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set
00:28:55.316 [2024-12-09 12:04:03.180577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor
00:28:55.316 [2024-12-09 12:04:03.180731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:55.316 [2024-12-09 12:04:03.180737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:55.316 [2024-12-09 12:04:03.180742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:55.316 [2024-12-09 12:04:03.180747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:55.316 [2024-12-09 12:04:03.192689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.316 [2024-12-09 12:04:03.193241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.316 [2024-12-09 12:04:03.193271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.316 [2024-12-09 12:04:03.193280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.316 [2024-12-09 12:04:03.193446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.316 [2024-12-09 12:04:03.193599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.316 [2024-12-09 12:04:03.193606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.316 [2024-12-09 12:04:03.193612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.316 [2024-12-09 12:04:03.193617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.578 [2024-12-09 12:04:03.205290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.205795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.205826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.205835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.206003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.206160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.206166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.206172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.206178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.578 [2024-12-09 12:04:03.218007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.218480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.218495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.218501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.218656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.218807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.218813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.218818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.218823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.578 [2024-12-09 12:04:03.230615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.230980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.230993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.230998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.231149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.231298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.231304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.231309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.231314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.578 [2024-12-09 12:04:03.243255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.243718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.243748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.243757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.243926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.244079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.244085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.244095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.244100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.578 [2024-12-09 12:04:03.255933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.256394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.256409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.256414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.256564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.256720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.256726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.256731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.256736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.578 [2024-12-09 12:04:03.268543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.269178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.269209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.578 [2024-12-09 12:04:03.269217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.578 [2024-12-09 12:04:03.269384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.578 [2024-12-09 12:04:03.269536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.578 [2024-12-09 12:04:03.269543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.578 [2024-12-09 12:04:03.269548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.578 [2024-12-09 12:04:03.269554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.578 [2024-12-09 12:04:03.281223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.578 [2024-12-09 12:04:03.281730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.578 [2024-12-09 12:04:03.281761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.281770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.281939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.282091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.282098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.282104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.282109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.579 [2024-12-09 12:04:03.293936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.294396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.294411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.294416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.294566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.294721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.294727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.294732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.294737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.579 [2024-12-09 12:04:03.306538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.307117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.307148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.307157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.307322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.307476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.307482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.307487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.307493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.579 [2024-12-09 12:04:03.319167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.319633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.319653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.319659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.319810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.319960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.319965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.319970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.319975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.579 [2024-12-09 12:04:03.331776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.332334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.332365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.332378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.332543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.332702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.332709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.332715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.332720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:55.579 [2024-12-09 12:04:03.344385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.344994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.345024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.345033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.345200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.345352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.345359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.345364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.345370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:55.579 4306.33 IOPS, 16.82 MiB/s [2024-12-09T11:04:03.465Z] [2024-12-09 12:04:03.357041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.579 [2024-12-09 12:04:03.357621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.579 [2024-12-09 12:04:03.357642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.579 [2024-12-09 12:04:03.357648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.579 [2024-12-09 12:04:03.357799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.579 [2024-12-09 12:04:03.357950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.579 [2024-12-09 12:04:03.357955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.579 [2024-12-09 12:04:03.357961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.579 [2024-12-09 12:04:03.357965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the cycle continues unchanged at ~12-13 ms intervals, 12:04:03.369679 through 12:04:03.712164, every attempt ending with "Resetting controller failed." ...]
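Each NOTICE-to-ERROR group above is one pass of the NVMe driver's reconnect state machine: disconnect the controller, start an asynchronous reconnect, then poll it until it completes or fails. A hedged sketch of how a caller drives that cycle with SPDK's public API follows; the function names match those in the log, but the return-code handling is an assumption for illustration, not a copy of bdev_nvme's logic:

/* Hedged sketch of one reconnect cycle as seen in the log, using SPDK's
 * public controller API (spdk/nvme.h). */
#include <errno.h>
#include "spdk/nvme.h"

int try_reconnect_once(struct spdk_nvme_ctrlr *ctrlr)
{
    /* "resetting controller": tear down the existing qpairs/connection. */
    int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
    if (rc != 0) {
        return rc;
    }

    /* Start the asynchronous reconnect; for TCP this re-dials the target. */
    spdk_nvme_ctrlr_reconnect_async(ctrlr);

    /* Poll the reconnect to completion. In the log every attempt ends in
     * "controller reinitialization failed" because the underlying
     * connect() is refused. */
    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    } while (rc == -EAGAIN);

    return rc; /* 0 on success; negative mirrors "Resetting controller failed." */
}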
00:28:55.844 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.844 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:55.844 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:55.844 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.844 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:55.844 [2024-12-09 12:04:03.724002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:55.844 [2024-12-09 12:04:03.724600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.844 [2024-12-09 12:04:03.724630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:55.844 [2024-12-09 12:04:03.724645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:55.844 [2024-12-09 12:04:03.724812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:55.844 [2024-12-09 12:04:03.724965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:55.844 [2024-12-09 12:04:03.724972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:55.844 [2024-12-09 12:04:03.724978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:55.844 [2024-12-09 12:04:03.724985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.105 [2024-12-09 12:04:03.736676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.105 [2024-12-09 12:04:03.737158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.105 [2024-12-09 12:04:03.737173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.105 [2024-12-09 12:04:03.737179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.105 [2024-12-09 12:04:03.737333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.105 [2024-12-09 12:04:03.737483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.105 [2024-12-09 12:04:03.737489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.105 [2024-12-09 12:04:03.737494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.105 [2024-12-09 12:04:03.737499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:56.105 [2024-12-09 12:04:03.749322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.105 [2024-12-09 12:04:03.749977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.105 [2024-12-09 12:04:03.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.105 [2024-12-09 12:04:03.750016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.105 [2024-12-09 12:04:03.750182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.105 [2024-12-09 12:04:03.750335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.105 [2024-12-09 12:04:03.750342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.105 [2024-12-09 12:04:03.750347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.105 [2024-12-09 12:04:03.750353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.105 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.105 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.105 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.106 [2024-12-09 12:04:03.762044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.762404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.762419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.762425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.762575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.762731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.762737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.762742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.106 [2024-12-09 12:04:03.762747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
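The xtrace above also shows the harness arming its safety net: nvmf/common.sh@508 registers trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT, so the target is torn down even if the test dies mid-run. A minimal sketch of the same pattern, with a placeholder body standing in for the harness's real nvmftestfini:

    #!/usr/bin/env bash
    # Sketch only: the cleanup body is a placeholder, not the harness's
    # nvmftestfini. Registering it for SIGINT, SIGTERM and EXIT means
    # interruption and normal completion both trigger teardown.
    cleanup() {
        echo "tearing down nvmf target" >&2
    }
    trap 'cleanup' SIGINT SIGTERM EXIT

    # ... test body would run here ...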
00:28:56.106 [2024-12-09 12:04:03.763702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.106 [2024-12-09 12:04:03.774706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.775174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.775187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.775192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.775342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.775491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.775498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.775503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.106 [2024-12-09 12:04:03.775507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.106 [2024-12-09 12:04:03.787314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.787888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.787920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.787929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.788094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.788247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.788253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.788259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.106 [2024-12-09 12:04:03.788265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
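Each failure block above begins with connect() failed, errno = 111, i.e. ECONNREFUSED: the host keeps re-dialing 10.0.0.2:4420 while nothing is listening there yet. A minimal sketch of the same poll-until-accepted idea, assuming bash's /dev/tcp redirection; the retry budget and sleep interval are illustrative, not taken from the harness:

    # Probe the port in a subshell so the descriptor is closed again as
    # soon as the attempt returns; connect() failing with ECONNREFUSED
    # makes the subshell exit non-zero.
    wait_for_listener() {
        local ip=$1 port=$2 retries=${3:-50} i
        for ((i = 0; i < retries; i++)); do
            if (exec 3<>"/dev/tcp/${ip}/${port}") 2>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_listener 10.0.0.2 4420 && echo "listener is up"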
00:28:56.106 Malloc0 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.106 [2024-12-09 12:04:03.799942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.800320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.800335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.800341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.800491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.800646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.800652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.800662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.106 [2024-12-09 12:04:03.800667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.106 [2024-12-09 12:04:03.812535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.812941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.812956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.812962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.813111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.813261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.813267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.813272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:56.106 [2024-12-09 12:04:03.813277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.106 [2024-12-09 12:04:03.825232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.825882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.106 [2024-12-09 12:04:03.825914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bc20 with addr=10.0.0.2, port=4420 00:28:56.106 [2024-12-09 12:04:03.825923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bc20 is same with the state(6) to be set 00:28:56.106 [2024-12-09 12:04:03.826066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.106 [2024-12-09 12:04:03.826089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bc20 (9): Bad file descriptor 00:28:56.106 [2024-12-09 12:04:03.826242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:56.106 [2024-12-09 12:04:03.826249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:56.106 [2024-12-09 12:04:03.826254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:56.106 [2024-12-09 12:04:03.826260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.106 12:04:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 227298 00:28:56.106 [2024-12-09 12:04:03.837941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:56.106 [2024-12-09 12:04:03.865083] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
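The rpc_cmd calls traced above are what end the reconnect storm: transport, backing Malloc bdev, subsystem, namespace, and finally the listener, after which the pending reset immediately reports "Resetting controller successful". A sketch of the same bring-up issued directly with SPDK's scripts/rpc.py; the socket path shown is the default and must match whatever RPC address the target application was started on:

    # Sketch of the traced sequence, run against a live nvmf_tgt.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # The initiator's ECONNREFUSED loop can only end once this runs:
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420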
00:28:57.616 4547.86 IOPS, 17.77 MiB/s
[2024-12-09T11:04:06.443Z] 5599.62 IOPS, 21.87 MiB/s
[2024-12-09T11:04:07.384Z] 6402.22 IOPS, 25.01 MiB/s
[2024-12-09T11:04:08.766Z] 7051.00 IOPS, 27.54 MiB/s
[2024-12-09T11:04:09.707Z] 7600.00 IOPS, 29.69 MiB/s
[2024-12-09T11:04:10.648Z] 8027.75 IOPS, 31.36 MiB/s
[2024-12-09T11:04:11.588Z] 8407.31 IOPS, 32.84 MiB/s
[2024-12-09T11:04:12.531Z] 8742.71 IOPS, 34.15 MiB/s
00:29:04.645 Latency(us)
[2024-12-09T11:04:12.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:04.645 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:04.645 Verification LBA range: start 0x0 length 0x4000
00:29:04.645 Nvme1n1 : 15.00 9009.55 35.19 13422.50 0.00 5687.96 563.20 14636.37
00:29:04.645 [2024-12-09T11:04:12.531Z] ===================================================================================================================
00:29:04.645 [2024-12-09T11:04:12.531Z] Total : 9009.55 35.19 13422.50 0.00 5687.96 563.20 14636.37
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # sync
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # set +e
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # for i in {1..20}
00:29:04.645 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # set -e
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@130 -- # return 0
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 228455 ']'
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 228455
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 228455 ']'
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 228455
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228455
00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf --
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228455' 00:29:04.906 killing process with pid 228455 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 228455 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 228455 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # iptr 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # remove_spdk_ns 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.906 12:04:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:29:07.457 00:29:07.457 real 0m28.161s 00:29:07.457 user 1m3.528s 00:29:07.457 sys 0m7.579s 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.457 ************************************ 00:29:07.457 END TEST nvmf_bdevperf 00:29:07.457 ************************************ 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.457 ************************************ 00:29:07.457 START TEST nvmf_target_disconnect 00:29:07.457 ************************************ 00:29:07.457 12:04:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:07.457 * Looking for test storage... 
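The bdevperf summary above is internally consistent: at the job's 4096-byte IO size, MiB/s is just IOPS scaled by the IO size, so the Nvme1n1 row's 9009.55 IOPS corresponds to the reported 35.19 MiB/s. A one-line cross-check:

    # MiB/s = IOPS * io_size_bytes / 2^20; figures from the table above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 9009.55 * 4096 / 1048576 }'
    # prints: 35.19 MiB/s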
00:29:07.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.457 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.458 --rc genhtml_branch_coverage=1 00:29:07.458 --rc genhtml_function_coverage=1 00:29:07.458 --rc genhtml_legend=1 00:29:07.458 --rc geninfo_all_blocks=1 00:29:07.458 --rc geninfo_unexecuted_blocks=1 00:29:07.458 00:29:07.458 ' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.458 --rc genhtml_branch_coverage=1 00:29:07.458 --rc genhtml_function_coverage=1 00:29:07.458 --rc genhtml_legend=1 00:29:07.458 --rc geninfo_all_blocks=1 00:29:07.458 --rc geninfo_unexecuted_blocks=1 00:29:07.458 00:29:07.458 ' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.458 --rc genhtml_branch_coverage=1 00:29:07.458 --rc genhtml_function_coverage=1 00:29:07.458 --rc genhtml_legend=1 00:29:07.458 --rc geninfo_all_blocks=1 00:29:07.458 --rc geninfo_unexecuted_blocks=1 00:29:07.458 00:29:07.458 ' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.458 --rc genhtml_branch_coverage=1 00:29:07.458 --rc genhtml_function_coverage=1 00:29:07.458 --rc genhtml_legend=1 00:29:07.458 --rc geninfo_all_blocks=1 00:29:07.458 --rc geninfo_unexecuted_blocks=1 00:29:07.458 00:29:07.458 ' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # : 0 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:29:07.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@56 -- # have_pci_nics=0 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # xtrace_disable 00:29:07.458 12:04:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_devs=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_devs 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_net_devs=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # pci_drivers=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # local -A pci_drivers 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # net_devs=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga net_devs 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # e810=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga e810 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # x722=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga x722 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # mlx=() 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # local -ga mlx 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:15.607 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:15.607 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.607 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:15.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:15.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
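The NIC discovery traced above boils down to a sysfs walk: a PCI function's kernel netdev name is the directory entry under /sys/bus/pci/devices/<bdf>/net/. A condensed sketch using the two e810 ports found in this run; substitute your own BDFs on other hardware:

    # Echo the netdev registered under each test NIC, mirroring the
    # pci_net_devs=(...) glob in nvmf/common.sh.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
    # on this machine: cvl_0_0 and cvl_0_1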
00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:29:15.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:29:15.608 00:29:15.608 --- 10.0.0.2 ping statistics --- 00:29:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.608 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:29:15.608 00:29:15.608 --- 10.0.0.1 ping statistics --- 00:29:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.608 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:15.608 ************************************ 00:29:15.608 START TEST nvmf_target_disconnect_tc1 00:29:15.608 ************************************ 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:15.608 12:04:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.608 [2024-12-09 12:04:22.672725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.608 [2024-12-09 12:04:22.672825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1732ae0 with addr=10.0.0.2, port=4420 00:29:15.608 [2024-12-09 12:04:22.672867] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:15.608 [2024-12-09 12:04:22.672885] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:15.608 [2024-12-09 12:04:22.672893] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:15.608 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:15.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:15.608 Initializing NVMe Controllers 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.608 00:29:15.608 real 0m0.140s 00:29:15.608 user 0m0.065s 00:29:15.608 sys 0m0.074s 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:15.608 ************************************ 00:29:15.608 END TEST nvmf_target_disconnect_tc1 00:29:15.608 ************************************ 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:15.608 ************************************ 00:29:15.608 START TEST nvmf_target_disconnect_tc2 00:29:15.608 ************************************ 00:29:15.608 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=234496 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 234496 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 234496 ']' 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.609 12:04:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.609 [2024-12-09 12:04:22.840888] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:29:15.609 [2024-12-09 12:04:22.840943] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.609 [2024-12-09 12:04:22.937667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.609 [2024-12-09 12:04:22.989666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.609 [2024-12-09 12:04:22.989718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
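tc1 above passes precisely because the wrapped command fails: NOT runs the reconnect example before any listener exists, spdk_nvme_probe() aborts with ECONNREFUSED, es becomes 1, and the final (( !es == 0 )) turns that expected failure into success. A minimal sketch of the inversion helper; the harness's real NOT also special-cases signal exits (es > 128), which is omitted here:

    # Succeed only if the wrapped command does NOT succeed.
    not() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    not false && echo "expected failure observed"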
00:29:15.609 [2024-12-09 12:04:22.989727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.609 [2024-12-09 12:04:22.989734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.609 [2024-12-09 12:04:22.989740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.609 [2024-12-09 12:04:22.992051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:15.609 [2024-12-09 12:04:22.992213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:15.609 [2024-12-09 12:04:22.992378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.609 [2024-12-09 12:04:22.992378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.870 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.870 Malloc0 00:29:15.871 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.871 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:15.871 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.871 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.871 [2024-12-09 12:04:23.748984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.132 12:04:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.132 [2024-12-09 12:04:23.789364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=234827 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:16.132 12:04:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.052 12:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 234496 00:29:18.052 12:04:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error 
(sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Read completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 Write completed with error (sct=0, sc=8) 00:29:18.052 starting I/O failed 00:29:18.052 [2024-12-09 12:04:25.823509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.052 [2024-12-09 12:04:25.823943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.052 [2024-12-09 12:04:25.823998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.052 qpair failed and we were unable to recover it. 00:29:18.052 [2024-12-09 12:04:25.824349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.053 [2024-12-09 12:04:25.824362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.053 qpair failed and we were unable to recover it. 00:29:18.053 [2024-12-09 12:04:25.824890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.053 [2024-12-09 12:04:25.824933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.053 qpair failed and we were unable to recover it. 
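As a hedged sketch of host/target_disconnect.sh lines 40-47 as traced above: tc2 launches the reconnect example with queue depth 32, waits two seconds, then hard-kills the target. The 32 "completed with error (sct=0, sc=8)" completions above are exactly the -q 32 in-flight I/Os draining after the CQ transport error, and the errno = 111 triplets around this point are the example's reconnect attempts being refused because nothing listens on 10.0.0.2:4420 any more.

# Hedged reconstruction; $spdk and $nvmfpid as in the bring-up sketch above.
"$spdk/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!        # 234827 in this run
sleep 2                # let I/O get going
kill -9 "$nvmfpid"     # hard-kill the target (234496) while I/O is in flight
sleep 2                # in-flight I/Os fail; reconnects now hit ECONNREFUSED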
00:29:18.053 [2024-12-09 12:04:25.825269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.053 [2024-12-09 12:04:25.825282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.053 qpair failed and we were unable to recover it.
[duplicate log entries elided: the same connect() failed (errno = 111) / sock connection error (tqpair=0x199d0c0, addr=10.0.0.2, port=4420) / qpair failed triplet repeats, with only the timestamps advancing, through 12:04:25.875 while the reconnect example keeps retrying]
00:29:18.057 [2024-12-09 12:04:25.875665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.875688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.876016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.876037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.876345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.876374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.876763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.876786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.877174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.877195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.877500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.877520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.877828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.877850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.878054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.878074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.878386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.878406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.878724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.878745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 
00:29:18.057 [2024-12-09 12:04:25.879072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.879093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.879400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.879420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.879755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.879777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.880099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.880120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.880477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.880498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.880808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.880830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.881151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.881172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.881472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.881492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.881813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.881835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.882140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.882161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 
00:29:18.057 [2024-12-09 12:04:25.882467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.882488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.882811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.882834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.883150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.883171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.883400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.883748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.883770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.884076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.884096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.884408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.884429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.057 [2024-12-09 12:04:25.884760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.057 [2024-12-09 12:04:25.884782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.057 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.885103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.885123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.885463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.885484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 
00:29:18.058 [2024-12-09 12:04:25.885794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.885815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.886199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.886221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.886610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.886631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.886974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.886996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.887314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.887335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.887649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.887670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.887985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.888014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.888354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.888382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.888751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.888781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.889134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.889162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 
00:29:18.058 [2024-12-09 12:04:25.889512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.889539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.889873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.889903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.890239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.890267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.890614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.890652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.891006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.891035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.891368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.891397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.891734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.891764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.892117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.892145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.892502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.892531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.892860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.892890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 
00:29:18.058 [2024-12-09 12:04:25.893230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.893259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.893602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.893630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.893994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.894023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.894345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.894374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.894714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.894743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.895081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.895109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.895474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.895502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.895882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.895911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.896233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.896272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.896623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.896671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 
00:29:18.058 [2024-12-09 12:04:25.896997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.897026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.897352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.897380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.897656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.897685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.898038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.898067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.898408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.898437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.898797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.898826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.899181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.058 [2024-12-09 12:04:25.899209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.058 qpair failed and we were unable to recover it. 00:29:18.058 [2024-12-09 12:04:25.899465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.899493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.899818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.899848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.900212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.900240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-12-09 12:04:25.900573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.900601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.900957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.900987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.901334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.901364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.901727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.901756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.902117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.902146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.902475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.902505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.902858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.902889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.903220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.903248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.903587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.903615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.903970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.903999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-12-09 12:04:25.904263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.904290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.904502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.904530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.904882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.904913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.905269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.905297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.905678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.905707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.906032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.906066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.906403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.906431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.906774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.906804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.907161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.907189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.907531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.907558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-12-09 12:04:25.907891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.907920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.908164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.908193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.908545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.908573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.908929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.908958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.909321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.909349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.909702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.909732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.910067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.910096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.910473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.910502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.910854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.910884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.911229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.911258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 
00:29:18.059 [2024-12-09 12:04:25.911625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.911669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.912001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.912029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.912398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.912426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.912755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.912784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.913154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.913182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.913528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.913557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.059 [2024-12-09 12:04:25.913892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.059 [2024-12-09 12:04:25.913920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.059 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.914325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.914353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.914714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.914743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.914969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.914997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-12-09 12:04:25.915337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.915366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.915711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.915740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.916081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.916109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.916455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.916483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.916842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.916871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.917203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.917232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.917564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.917592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.917947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.917976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.918301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.918329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.918661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.918691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-12-09 12:04:25.919025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.919054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.919376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.919752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.919781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.920079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.920106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.920483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.920511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.920911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.920940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.921263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.921293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.921615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.921653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.921987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.922015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 00:29:18.060 [2024-12-09 12:04:25.922268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.060 [2024-12-09 12:04:25.922295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.060 qpair failed and we were unable to recover it. 
00:29:18.060 [2024-12-09 12:04:25.922529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.060 [2024-12-09 12:04:25.922558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.060 qpair failed and we were unable to recover it.
00:29:18.060 [2024-12-09 12:04:25.922920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.060 [2024-12-09 12:04:25.922950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.060 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 12:04:25.922 through 12:04:26.001, with only the timestamps advancing ...]
00:29:18.338 [2024-12-09 12:04:26.001022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.338 [2024-12-09 12:04:26.001050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.338 qpair failed and we were unable to recover it.
00:29:18.338 [2024-12-09 12:04:26.001405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.001434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.001783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.001813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.002159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.002187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.002407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.002435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.002807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.002837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.003190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.003217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.003584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.003612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.003972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.004001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.004348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.004376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.004711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.004740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 
00:29:18.338 [2024-12-09 12:04:26.005115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.338 [2024-12-09 12:04:26.005144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.338 qpair failed and we were unable to recover it. 00:29:18.338 [2024-12-09 12:04:26.005494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.005523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.005851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.005881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.006253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.006281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.006620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.006659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.007060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.007089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.007403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.007431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.007800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.007830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.008175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.008204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.008574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.008611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 
00:29:18.339 [2024-12-09 12:04:26.008975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.009004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.009333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.009363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.009735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.009766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.010034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.010063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.010422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.010450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.010826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.010856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.011194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.011223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.011548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.011576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.011927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.011957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.012301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.012329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 
00:29:18.339 [2024-12-09 12:04:26.012612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.012656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.012989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.013018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.013279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.013307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.013627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.013676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.014004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.014033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.014374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.014403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.014701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.014730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.015100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.015128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.015459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.015488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.015858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.015887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 
00:29:18.339 [2024-12-09 12:04:26.016169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.016197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.016520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.016549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.016789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.016818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.017146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.017175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.017532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.017561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.017937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.018295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.018324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.018692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.018723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.019085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.019114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.019477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.019513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 
00:29:18.339 [2024-12-09 12:04:26.019867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.339 [2024-12-09 12:04:26.019897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.339 qpair failed and we were unable to recover it. 00:29:18.339 [2024-12-09 12:04:26.020143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.020172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.020542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.020570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.020819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.020848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.021207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.021236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.021578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.021607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.021955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.021986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.022346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.022380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.022753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.022784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.023146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.023175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 
00:29:18.340 [2024-12-09 12:04:26.023515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.023544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.023900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.023930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.024270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.024298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.024657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.024686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.024962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.024989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.025327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.025355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.025714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.025743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.026081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.026109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.026468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.026497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.026863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.026893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 
00:29:18.340 [2024-12-09 12:04:26.027241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.027269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.027621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.027659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.027996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.028024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.028368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.028396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.028749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.028779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.029113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.029142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.029487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.029516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.029858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.029887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.030140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.030168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.030456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.030484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 
00:29:18.340 [2024-12-09 12:04:26.030874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.030906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.031268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.031296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.031657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.031688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.031997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.032026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.032375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.032411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.032756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.032786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.033155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.033183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.033513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.033541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.033884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.033914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 00:29:18.340 [2024-12-09 12:04:26.034256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.034285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.340 qpair failed and we were unable to recover it. 
00:29:18.340 [2024-12-09 12:04:26.034649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-12-09 12:04:26.034679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.035015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.035043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.035390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.035419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.035757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.035786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.036160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.036189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.036500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.036530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.036861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.036890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.037232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.037261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.037608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.037646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.037937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.037965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 
00:29:18.341 [2024-12-09 12:04:26.038317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.038346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.038706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.038736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.039071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.039099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.039465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.039494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.039814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.039843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.040193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.040222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.040599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.040628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.040957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.040995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.041334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.041365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.041729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.041759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 
00:29:18.341 [2024-12-09 12:04:26.042019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.042047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.042401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.042430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.042786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.042817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.043178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.043207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.043544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.043573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.043941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.043972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.044321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.044351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.044693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.044723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.045082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.045110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.045426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.045454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 
00:29:18.341 [2024-12-09 12:04:26.045802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.045831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.046081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.046109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.046484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.046513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.046886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.046917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.047262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.047292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.047661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.047704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.048062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.048091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.048441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.048471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.048800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.048831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 00:29:18.341 [2024-12-09 12:04:26.049201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-12-09 12:04:26.049229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.341 qpair failed and we were unable to recover it. 
00:29:18.341 [2024-12-09 12:04:26.049556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.342 [2024-12-09 12:04:26.049585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.342 qpair failed and we were unable to recover it.
00:29:18.342 [... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt, timestamps 12:04:26.049810 through 12:04:26.127091, log prefixes 00:29:18.342-00:29:18.347 ...]
00:29:18.347 [2024-12-09 12:04:26.127455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.127484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.127718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.127748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.128063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.128099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.128447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.128476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.128732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.128761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.129194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.129223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.129567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.129596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.129966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.129995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.130373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.130402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.130743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.130774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 
00:29:18.347 [2024-12-09 12:04:26.131143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.131172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.131543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.131571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.131901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.131932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.132290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.132319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.132672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.132702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.133073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.133102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.133469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.133498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.347 [2024-12-09 12:04:26.133841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.347 [2024-12-09 12:04:26.133872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.347 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.133978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.134007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.134234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.134264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-12-09 12:04:26.134633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.134689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.135037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.135065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.135430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.135820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.135850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.136210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.136239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.136661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.136691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.137035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.137063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.137317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.137345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.137702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.137731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.138094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.138130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-12-09 12:04:26.138358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.138388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.138739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.138769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.139120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.139150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.139485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.139514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.139808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.139837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.140210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.140238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.140604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.140633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.140975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.141005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.141233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.141262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.141385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.141418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-12-09 12:04:26.141763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.141794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.142039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.142067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.142421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.142450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.142810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.142841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.143195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.143224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.143479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.143507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.143831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.143862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.144213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.144242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.144467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.144496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.144824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.144860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 
00:29:18.348 [2024-12-09 12:04:26.145210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.145616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.145656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.145947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.145975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.146321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.146350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.146558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.146586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.147014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.147044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.147372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.348 [2024-12-09 12:04:26.147406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.348 qpair failed and we were unable to recover it. 00:29:18.348 [2024-12-09 12:04:26.147625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.147682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.148018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.148049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.148259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.148288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-12-09 12:04:26.148651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.148681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.149023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.149051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.149407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.149436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.149801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.149831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.150181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.150209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.150566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.150595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.150967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.150997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.151344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.151373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.151473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.151502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.151833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.151863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-12-09 12:04:26.152206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.152236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.152614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.152652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.153038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.153067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.153433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.153463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.153812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.153843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.154213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.154241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.154590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.154618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.154846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.154876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.155117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.155146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.155520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.155548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-12-09 12:04:26.155897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.155927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.156165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.156198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.156534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.156563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.156822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.156852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.157246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.157275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.157587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.157616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.157990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.158020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.158392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.158421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.158777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.158807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.159050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.159080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 
00:29:18.349 [2024-12-09 12:04:26.159299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.159331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.159713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.159743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.160180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.160209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.160555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.160584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.160851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.160881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.161246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.161274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.349 [2024-12-09 12:04:26.161716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.349 [2024-12-09 12:04:26.161746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.349 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.162124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.162155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.162515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.162544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.162979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.163010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-12-09 12:04:26.163344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.163372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.163727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.163757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.164099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.164129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.164500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.164529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.164929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.164959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.165316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.165345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.165724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.165754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.166140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.166169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.166515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.166546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.166894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.166925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-12-09 12:04:26.167279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.167308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.167665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.167696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.168051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.168080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.168439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.168468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.168815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.168845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.169213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.169241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.169596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.169625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.169870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.169902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.170129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.170157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.170539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.170568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-12-09 12:04:26.170977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.171008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.171342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.171371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.171730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.171760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.172127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.172156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.172514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.172550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.172909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.172938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.173285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.173313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.173676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.173706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.174052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.174080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 00:29:18.350 [2024-12-09 12:04:26.174432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.350 [2024-12-09 12:04:26.174461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.350 qpair failed and we were unable to recover it. 
00:29:18.350 [2024-12-09 12:04:26.174813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.350 [2024-12-09 12:04:26.174843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.350 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats unchanged, with only the timestamps advancing, from 12:04:26.174813 through 12:04:26.253459 (console time 00:29:18.350 to 00:29:18.638) ...]
00:29:18.638 [2024-12-09 12:04:26.253797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.253827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.254198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.254233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.254583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.254611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.254855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.254884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.255233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.255262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.255617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.255663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.256033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.256062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.256465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.256495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.256756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.256786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.257128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.257157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 
00:29:18.638 [2024-12-09 12:04:26.257509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.257538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.257872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.257902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.258234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.258263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.258629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.258671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.259005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.259034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.259399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.259427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.259789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.259821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.260202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.260230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.260584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.260613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.260986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.261022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 
00:29:18.638 [2024-12-09 12:04:26.261375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.261404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.261768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.261799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.262166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.262194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.262564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.262593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.262864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.262893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.263252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.263281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.263651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.263681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.264023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.264051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.264482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.264517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 00:29:18.638 [2024-12-09 12:04:26.264754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.264787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.638 qpair failed and we were unable to recover it. 
00:29:18.638 [2024-12-09 12:04:26.265146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.638 [2024-12-09 12:04:26.265176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.265534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.265563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.265991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.266023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.266369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.266399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.266758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.266788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.267139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.267167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.267530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.267558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.267906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.267936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.268292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.268321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.268682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.268712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 
00:29:18.639 [2024-12-09 12:04:26.269002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.269030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.269388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.269417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.269775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.269806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.270171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.270201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.270559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.270589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.270949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.270980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.271318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.271347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.271703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.271734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.272102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.272131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.272338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.272370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 
00:29:18.639 [2024-12-09 12:04:26.272723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.272754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.273108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.273136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.273490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.273519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.273882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.273913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.274270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.274300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.274668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.274698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.275120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.275150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.275506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.275535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.275900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.275930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.276308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.276337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 
00:29:18.639 [2024-12-09 12:04:26.276698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.276729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.277145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.277174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.277536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.277565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.277934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.639 [2024-12-09 12:04:26.277964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.639 qpair failed and we were unable to recover it. 00:29:18.639 [2024-12-09 12:04:26.278285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.278314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.278679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.278709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.279086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.279115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.279489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.279518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.279869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.279900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.280270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.280299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 
00:29:18.640 [2024-12-09 12:04:26.280674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.280705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.281105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.281134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.281497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.281526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.281858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.281897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.282229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.282258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.282613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.282656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.283040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.283070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.283418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.283447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.283782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.283813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.284189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.284218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 
00:29:18.640 [2024-12-09 12:04:26.284594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.284623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.284935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.284965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.285327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.285356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.285702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.285733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.286160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.286189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.286498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.286527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.286901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.286931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.287284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.287313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.287674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.287704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.288091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.288120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 
00:29:18.640 [2024-12-09 12:04:26.288473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.288503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.288869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.288899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.289139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.289172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.289537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.289566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.290000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.290030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.290380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.290409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.290768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.290803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.291036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.291068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.291432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.291461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.291813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.291842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 
00:29:18.640 [2024-12-09 12:04:26.292205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.292234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.292586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.292615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.292986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.293016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.640 qpair failed and we were unable to recover it. 00:29:18.640 [2024-12-09 12:04:26.293392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.640 [2024-12-09 12:04:26.293421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.293789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.293820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.294163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.294192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.294550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.294579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.294943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.294973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.295387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.295415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.295648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.295682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 
00:29:18.641 [2024-12-09 12:04:26.295971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.296001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.296361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.296391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.296718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.296748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.297098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.297127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.297498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.297527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.297870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.297900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.298256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.298284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.298719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.298748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.299102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.299130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.299508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.299536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 
00:29:18.641 [2024-12-09 12:04:26.299891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.299920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.300166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.300194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.300494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.300522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.300926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.300962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.301318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.301348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.301710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.301740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.302108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.302137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.302513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.302542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.302957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.302987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 00:29:18.641 [2024-12-09 12:04:26.303325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.641 [2024-12-09 12:04:26.303355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.641 qpair failed and we were unable to recover it. 
00:29:18.641 [2024-12-09 12:04:26.303711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.641 [2024-12-09 12:04:26.303741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.641 qpair failed and we were unable to recover it.
00:29:18.641 [... the three-line error group above repeats, with only timestamps varying, 210 times in total between 12:04:26.303711 and 12:04:26.384791 (connect() to 10.0.0.2:4420 refused with errno = 111 on every attempt); intermediate occurrences elided ...]
00:29:18.647 [2024-12-09 12:04:26.384761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.647 [2024-12-09 12:04:26.384791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.647 qpair failed and we were unable to recover it.
00:29:18.647 [2024-12-09 12:04:26.385168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.385203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.385491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.385520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.385772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.385803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.386183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.386212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.386523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.386553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.386917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.386950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.387317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.387346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.387739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.387772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.388146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.388175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.388538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.388566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 
00:29:18.647 [2024-12-09 12:04:26.388922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.388954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.389322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.389350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.389719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.389750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.390197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.390226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.390592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.390621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.390967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.390997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.391403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.391431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.391836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.392203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.392232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.392600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.392630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 
00:29:18.647 [2024-12-09 12:04:26.392979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.393008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.393374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.393404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.393773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.393804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.394167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.394554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.394582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.394944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.394974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.395336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.395365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.395747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.395782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.396112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.396141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.396489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.396520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 
00:29:18.647 [2024-12-09 12:04:26.396890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.396920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.397276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.397305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.397672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.397703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.398057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.398087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.398435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.398464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.647 qpair failed and we were unable to recover it. 00:29:18.647 [2024-12-09 12:04:26.398721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.647 [2024-12-09 12:04:26.398751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.399141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.399171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.399537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.399566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.399926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.399957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.400326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.400356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 
00:29:18.648 [2024-12-09 12:04:26.400618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.400655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.401043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.401073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.401475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.401505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.401840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.401871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.402215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.402246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.402499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.402527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.402896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.402926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.403183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.403212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.403567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.403596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.403962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.403992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 
00:29:18.648 [2024-12-09 12:04:26.404367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.404395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.404759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.404789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.405139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.405169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.405508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.405536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.405803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.406184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.406214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.406576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.406604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.406986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.407017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.407380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.407410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.407776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.407805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 
00:29:18.648 [2024-12-09 12:04:26.408167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.408197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.408550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.408579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.408943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.408972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.409340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.409368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.409715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.409746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.410088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.410118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.410517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.410546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.410835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.410865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.411248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.411278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.648 [2024-12-09 12:04:26.411653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.411684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 
00:29:18.648 [2024-12-09 12:04:26.412047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.648 [2024-12-09 12:04:26.412075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.648 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.412459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.412488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.412841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.412871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.413230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.413259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.413624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.413677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.413954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.413982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.414355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.414384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.414749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.414780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.415132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.415161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.415535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.415564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 
00:29:18.649 [2024-12-09 12:04:26.415827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.415857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.416205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.416234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.416565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.416596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.416972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.417003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.417365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.417394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.417754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.417784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.418150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.418179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.418567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.418595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.418936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.418966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.419329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.419359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 
00:29:18.649 [2024-12-09 12:04:26.419724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.419754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.420142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.420171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.420529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.420928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.420957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.421332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.421361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.421720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.421757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.422097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.422126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.422466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.422495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.422772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.422802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.423199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.423228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 
00:29:18.649 [2024-12-09 12:04:26.423464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.423493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.423892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.423922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.424284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.424314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.424716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.424985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.425013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.425359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.425390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.425751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.425781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.426142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.426171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.426532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.426560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 00:29:18.649 [2024-12-09 12:04:26.426898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.426929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.649 qpair failed and we were unable to recover it. 
00:29:18.649 [2024-12-09 12:04:26.427194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.649 [2024-12-09 12:04:26.427223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.427588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.427618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.427865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.427895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.428264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.428293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.428552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.428580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.428945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.428975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.429338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.429367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.429723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.429754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.430132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.430162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.430414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.430444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 
00:29:18.650 [2024-12-09 12:04:26.430781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.430811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.431173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.431203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.431558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.431599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.431977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.432007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.432367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.432395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.432760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.432790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.433130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.433160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.433532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.433561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.433922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.433952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 00:29:18.650 [2024-12-09 12:04:26.434309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.650 [2024-12-09 12:04:26.434338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.650 qpair failed and we were unable to recover it. 
00:29:18.650 [2024-12-09 12:04:26.434711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.650 [2024-12-09 12:04:26.434741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.650 qpair failed and we were unable to recover it.
00:29:18.650 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 12:04:26.435109 through 12:04:26.515130 ...]
00:29:18.929 [2024-12-09 12:04:26.515496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.929 [2024-12-09 12:04:26.515530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.929 qpair failed and we were unable to recover it.
00:29:18.929 [2024-12-09 12:04:26.515887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.515918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.516286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.516315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.516681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.516712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.517075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.517104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.517466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.517495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.517889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.517920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.518280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.518308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.518686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.518717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.519076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.519105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.519463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.519491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 
00:29:18.929 [2024-12-09 12:04:26.519826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.519856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.520231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.520261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.520627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.520668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.520927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.520956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.521312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.521342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.521717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.521748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.522097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.522127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.522465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.522494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.522854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.522889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.523270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.523300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 
00:29:18.929 [2024-12-09 12:04:26.523663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.523692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.524054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.524082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.524502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.524531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.524929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.524959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.525303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.525333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.525709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.525740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.526102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.526131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.929 qpair failed and we were unable to recover it. 00:29:18.929 [2024-12-09 12:04:26.526511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.929 [2024-12-09 12:04:26.526540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.526890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.526920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.527276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.527305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 
00:29:18.930 [2024-12-09 12:04:26.527677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.527706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.528089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.528117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.528475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.528504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.528880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.528910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.529259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.529288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.529663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.529694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.530064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.530094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.530451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.530479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.530745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.530775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.531168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.531197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 
00:29:18.930 [2024-12-09 12:04:26.531547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.531576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.531939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.531969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.532361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.532390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.532649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.532683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.533042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.533071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.533413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.533451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.533784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.533813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.534167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.534196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.534554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.535021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.535051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 
00:29:18.930 [2024-12-09 12:04:26.535412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.535440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.535833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.535863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.536215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.536244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.536611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.536648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.537054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.537449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.537478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.537715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.537749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.538184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.538213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.538542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.538572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.538948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.538979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 
00:29:18.930 [2024-12-09 12:04:26.539322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.539352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.539701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.539732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.540099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.540130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.540505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.540533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.540906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.540936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.541296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.541324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.930 [2024-12-09 12:04:26.541681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.930 [2024-12-09 12:04:26.541711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.930 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.542123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.542158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.542505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.542534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.542772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.542802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 
00:29:18.931 [2024-12-09 12:04:26.543144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.543174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.543540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.543568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.543945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.543976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.544265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.544295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.544565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.544594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.544962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.544992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.545386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.545416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.545774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.545805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.546172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.546200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.546565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.546593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 
00:29:18.931 [2024-12-09 12:04:26.546953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.546983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.547340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.547369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.547732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.547763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.548140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.548169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.548533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.548561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.548930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.548960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.549330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.549359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.549733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.549764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.550164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.550194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.550523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.550552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 
00:29:18.931 [2024-12-09 12:04:26.550891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.550921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.551180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.551209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.551660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.551691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.552039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.552068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.552405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.552440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.552782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.552812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.553165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.553195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.553555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.553584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.554025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.554058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.554423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.554451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 
00:29:18.931 [2024-12-09 12:04:26.554791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.554821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.555184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.555213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.555454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.555486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.555862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.555893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.556257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.556287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.556656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.931 [2024-12-09 12:04:26.556687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.931 qpair failed and we were unable to recover it. 00:29:18.931 [2024-12-09 12:04:26.557057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.557086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.557455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.557484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.557731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.557765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.558156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.558185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 
00:29:18.932 [2024-12-09 12:04:26.558623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.558689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.559056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.559085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.559442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.559471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.559618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.559663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.560064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.560093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.560450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.560479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.560873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.560914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.561243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.561272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.561635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.561673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.562026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.562062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 
00:29:18.932 [2024-12-09 12:04:26.562441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.562470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.562844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.562881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.563249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.563278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.563658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.563688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.564092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.564120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.564480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.564509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.564877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.564908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.565265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.565295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.565674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.565704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 00:29:18.932 [2024-12-09 12:04:26.566153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.932 [2024-12-09 12:04:26.566182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.932 qpair failed and we were unable to recover it. 
00:29:18.932 [2024-12-09 12:04:26.566552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.932 [2024-12-09 12:04:26.566582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.932 qpair failed and we were unable to recover it.
00:29:18.932 [... the three-line error above repeats continuously, with only the timestamps advancing (2024-12-09 12:04:26.566552 through 12:04:26.645082), as the connect to tqpair=0x199d0c0, addr=10.0.0.2, port=4420 is retried and refused on every attempt ...]
00:29:18.938 [2024-12-09 12:04:26.645391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.645419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.645775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.645805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.646165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.646194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.646562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.646591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.646953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.646983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.647344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.647373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.647718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.647749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.648108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.648138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.648501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.648531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.648886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.648916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 
00:29:18.938 [2024-12-09 12:04:26.649274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.649304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.649679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.649709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.650075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.650105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.650462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.650492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.650825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.650855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.651224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.651253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.651615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.651654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.652029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.652061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.652321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.652350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.652718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.652751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 
00:29:18.938 [2024-12-09 12:04:26.653149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.653179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.653536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.653565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.653936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.653967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.654301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.654331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.654702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.654735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.655019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.655049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.655399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.655429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.655776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.655806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.656166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.656197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.656557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.656588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 
00:29:18.938 [2024-12-09 12:04:26.656978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.657010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.938 qpair failed and we were unable to recover it. 00:29:18.938 [2024-12-09 12:04:26.657389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.938 [2024-12-09 12:04:26.657417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.657802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.657834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.658152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.658181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.658562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.658592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.658982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.659014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.659273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.659303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.659658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.659690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.659981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.660010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.660360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.660389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 
00:29:18.939 [2024-12-09 12:04:26.660761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.660792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.661188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.661217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.661565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.661594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.661957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.661987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.662398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.662430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.662855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.662886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.663261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.663290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.663624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.663679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.664043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.664072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.664426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.664456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 
00:29:18.939 [2024-12-09 12:04:26.664865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.664895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.665267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.665296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.665662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.665692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.666094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.666123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.666486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.666515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.666901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.666932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.667299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.667326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.667704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.667735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.668166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.668195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.668576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.668604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 
00:29:18.939 [2024-12-09 12:04:26.668990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.669021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.669382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.669412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.669757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.669787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.670154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.670184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.670414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.670444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.670825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.670856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.671161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.671197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.671558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.671588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.671944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.671975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.672333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.672362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 
00:29:18.939 [2024-12-09 12:04:26.672715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.939 [2024-12-09 12:04:26.672746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.939 qpair failed and we were unable to recover it. 00:29:18.939 [2024-12-09 12:04:26.673032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.673061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.673449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.673478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.673829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.673861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.674183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.674213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.674570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.674599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.674982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.675013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.675349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.675378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.675740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.675772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.676156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.676186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 
00:29:18.940 [2024-12-09 12:04:26.676552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.676582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.676984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.677015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.677388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.677419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.677787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.677817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.678233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.678262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.678611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.678652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.679011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.679041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.679398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.679428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.679792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.679824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.680184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.680212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 
00:29:18.940 [2024-12-09 12:04:26.680577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.680606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.680985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.681016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.681383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.681412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.681770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.681806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.682173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.682203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.682563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.682592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.682979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.683010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.683257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.683286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.683633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.683676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.684008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.684037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 
00:29:18.940 [2024-12-09 12:04:26.684397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.684427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.684835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.684866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.685213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.685242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.685622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.685663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.686015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.686044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.686413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.686442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.686815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.686845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.687224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.687253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.687504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.687532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.940 [2024-12-09 12:04:26.687869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.687899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 
00:29:18.940 [2024-12-09 12:04:26.688259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.940 [2024-12-09 12:04:26.688288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.940 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.688635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.688678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.688959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.688989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.689339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.689369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.689718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.689749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.690153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.690618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.690659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.691025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.691057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.691459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.691489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.691831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.691861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 
00:29:18.941 [2024-12-09 12:04:26.692224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.692253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.692544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.692573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.692956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.692987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.693259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.693288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.693635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.693677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.694076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.694106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.694466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.694496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.694862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.694893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.695236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.695267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 00:29:18.941 [2024-12-09 12:04:26.695649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.941 [2024-12-09 12:04:26.695681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.941 qpair failed and we were unable to recover it. 
00:29:18.941 [2024-12-09 12:04:26.696027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.941 [2024-12-09 12:04:26.696057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.941 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats identically for every reconnect attempt timestamped between 12:04:26.696428 and 12:04:26.775754 ...]
00:29:18.947 [2024-12-09 12:04:26.776120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.947 [2024-12-09 12:04:26.776150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:18.947 qpair failed and we were unable to recover it.
00:29:18.947 [2024-12-09 12:04:26.776514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.776543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.776917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.776947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.777291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.777320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.777694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.777725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.778087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.778115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.778486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.778515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.778934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.778964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.779213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.779242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.779670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.779700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.779985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.780014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 
00:29:18.947 [2024-12-09 12:04:26.780364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.780393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.780778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.780809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.781177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.781206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.781442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.781471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.781716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.781746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.782133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.782163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.782527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.782557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.782912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.782942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.783234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.783263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.783649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.783681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 
00:29:18.947 [2024-12-09 12:04:26.784060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.784089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.784449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.784478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.784829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.784859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.785238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.785268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.785628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.785670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.786059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.786089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.786443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.786473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.786830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.786860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.787226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.787255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.787617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.787657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 
00:29:18.947 [2024-12-09 12:04:26.788015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.788044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.788406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.788435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.788851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.947 [2024-12-09 12:04:26.788883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.947 qpair failed and we were unable to recover it. 00:29:18.947 [2024-12-09 12:04:26.789221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.789250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.789582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.789611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.789919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.789949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.790283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.790312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.790677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.790710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.791101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.791136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.791510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.791540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-12-09 12:04:26.791906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.791935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.792314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.792344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.792695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.792727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.793121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.793150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.793506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.793535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.793881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.793912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.794265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.794294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.794660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.794692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.795053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.795081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.795449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.795478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-12-09 12:04:26.795827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.795857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.796230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.796259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.796619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.796669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.797013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.797043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.797371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.797401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.797804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.797835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.798192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.798220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.798582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.798611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.798904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.798934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.799292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.799321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 
00:29:18.948 [2024-12-09 12:04:26.799689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.799722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.800101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.800130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:18.948 [2024-12-09 12:04:26.800492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.948 [2024-12-09 12:04:26.800521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:18.948 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.800900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.801310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.801339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.801588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.801624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.801991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.802021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.802387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.802416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.802815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.803154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.803184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 
00:29:19.222 [2024-12-09 12:04:26.803542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.803571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.803873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.222 [2024-12-09 12:04:26.803902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.222 qpair failed and we were unable to recover it. 00:29:19.222 [2024-12-09 12:04:26.804296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.804325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.804700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.804730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.805089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.805117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.805454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.805483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.805852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.805883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.806232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.806260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.806629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.806671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.807050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.807081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-12-09 12:04:26.807443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.807471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.807815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.807846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.808210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.808239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.808612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.808651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.808897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.808927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.809275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.809305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.809675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.809706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.810072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.810101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.810423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.810452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.810793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.810823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-12-09 12:04:26.811192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.811222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.811596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.811624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.812024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.812060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.812402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.812432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.812676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.812706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.813079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.813108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.813484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.813512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.813883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.813914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.814250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.814280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.814656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.814686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-12-09 12:04:26.815067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.815097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.815459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.815488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.815848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.815879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.816222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.816253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.816614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.816658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.817025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.817054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.817422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.817451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.817786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.817817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.818161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.818189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.818550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.818580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-12-09 12:04:26.818927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.223 [2024-12-09 12:04:26.818957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-12-09 12:04:26.819334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.819362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.819726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.819755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.820123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.820151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.820533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.820564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.820945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.820975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.821340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.821368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.821708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.821739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.822098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.822127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.822493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.822522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-12-09 12:04:26.822871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.822903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.823260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.823291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.823670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.823700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.824101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.824131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.824501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.824530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.824866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.824897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.825304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.825334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.825716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.825750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.826108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.826138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-12-09 12:04:26.826506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.826536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-12-09 12:04:26.826852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.224 [2024-12-09 12:04:26.826883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.224 qpair failed and we were unable to recover it.
00:29:19.224 [... the same triplet — posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats roughly 200 more times, timestamps [2024-12-09 12:04:26.827250] through [2024-12-09 12:04:26.905769], log prefix advancing from 00:29:19.224 to 00:29:19.229; only the timestamps differ between repetitions ...]
00:29:19.229 [2024-12-09 12:04:26.906128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.229 [2024-12-09 12:04:26.906158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.229 qpair failed and we were unable to recover it. 00:29:19.229 [2024-12-09 12:04:26.906406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.229 [2024-12-09 12:04:26.906435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.229 qpair failed and we were unable to recover it. 00:29:19.229 [2024-12-09 12:04:26.906818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.906848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.907242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.907270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.907613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.907654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.908040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.908069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.908417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.908446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.908799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.908831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.909244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.909273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.909623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.909663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 
00:29:19.230 [2024-12-09 12:04:26.910003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.910037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.910299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.910329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.910715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.910745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.911102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.911132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.911372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.911401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.911848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.911877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.912238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.912268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.912652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.912685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.913132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.913162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.913527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.913555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 
00:29:19.230 [2024-12-09 12:04:26.913920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.913950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.914291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.914321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.914697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.914729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.915094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.915125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.915490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.915519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.915877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.915909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.916249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.916277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.916556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.916584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.916970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.917001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.917275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.917304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 
00:29:19.230 [2024-12-09 12:04:26.917673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.917704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.918113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.918141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.918492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.918522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.918890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.918921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.919206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.919235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.919598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.919627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.919875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.919905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.920262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.920297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.920667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.920698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.921071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.921099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 
00:29:19.230 [2024-12-09 12:04:26.921464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.921493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.230 [2024-12-09 12:04:26.921754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.230 [2024-12-09 12:04:26.921784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.230 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.922185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.922213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.922578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.922607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.922975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.923005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.923345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.923374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.923747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.923780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.924164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.924193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.924553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.924582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.925022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.925052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 
00:29:19.231 [2024-12-09 12:04:26.925479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.925507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.925765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.925796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.926084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.926113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.926464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.926493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.926781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.926811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.927190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.927220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.927569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.927598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.927978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.928008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.928269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.928298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.928688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.928719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 
00:29:19.231 [2024-12-09 12:04:26.929150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.929179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.929423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.929451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.929813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.929845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.930213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.930241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.930586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.930622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.931022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.931053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.931391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.931421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.931766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.931796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.931988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.932017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.932381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.932410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 
00:29:19.231 [2024-12-09 12:04:26.932770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.932804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.933169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.933198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.933545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.933574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.933934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.933964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.934293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.934323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.934698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.934730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.231 qpair failed and we were unable to recover it. 00:29:19.231 [2024-12-09 12:04:26.935141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.231 [2024-12-09 12:04:26.935170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.935526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.935555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.935925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.935956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.936325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.936355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 
00:29:19.232 [2024-12-09 12:04:26.936636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.936680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.937131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.937161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.937523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.937553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.937894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.937925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.938359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.938389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.938746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.938777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.939116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.939500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.939532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.939900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.939930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.940278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.940315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 
00:29:19.232 [2024-12-09 12:04:26.940655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.940685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.941046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.941075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.941421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.941451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.941790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.941821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.942170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.942198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.942561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.942590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.942957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.942986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.943404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.943432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.943789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.943821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.944191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.944221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 
00:29:19.232 [2024-12-09 12:04:26.944581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.944611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.944963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.944993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.945391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.945421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.945759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.945790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.946033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.946062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.946424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.946454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.946797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.946829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.947194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.947223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.947592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.947620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.947898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.947927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 
00:29:19.232 [2024-12-09 12:04:26.948274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.948304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.948674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.948705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.949069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.949099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.949467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.949496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.949872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.949902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.232 [2024-12-09 12:04:26.950272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.232 [2024-12-09 12:04:26.950301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.232 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.950661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.950693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.950929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.950961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.951325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.951354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.951580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.951608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 
00:29:19.233 [2024-12-09 12:04:26.951949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.951980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.952336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.952365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.952726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.952756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.953139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.953169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.953531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.953561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.953905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.953935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.954312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.954342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.954751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.954781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.955225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.955254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 00:29:19.233 [2024-12-09 12:04:26.955614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.955651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it. 
00:29:19.233 [2024-12-09 12:04:26.956010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.233 [2024-12-09 12:04:26.956039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.233 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 12:04:26.956010 through 12:04:27.036352 ...]
00:29:19.238 [2024-12-09 12:04:27.036321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.036352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it.
00:29:19.238 [2024-12-09 12:04:27.036712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.036742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.037105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.037135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.037502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.037531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.037878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.037908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.038280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.038309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.038697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.038729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.039110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.039140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.238 [2024-12-09 12:04:27.039502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.238 [2024-12-09 12:04:27.039530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.238 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.039786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.039817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.040232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.040267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 
00:29:19.239 [2024-12-09 12:04:27.040633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.040676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.041071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.041100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.041444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.041473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.041837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.041867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.042230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.042258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.042615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.042659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.043059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.043088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.043400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.043430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.043802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.043832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.044181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.044209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 
00:29:19.239 [2024-12-09 12:04:27.044570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.044598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.044961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.044990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.045352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.045382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.045837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.045868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.046229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.046257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.046666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.046696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.047067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.047097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.047446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.047476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.047842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.047873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.048140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.048168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 
00:29:19.239 [2024-12-09 12:04:27.048542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.048570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.048957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.048987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.049340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.049369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.049620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.049664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.050064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.050093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.050454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.050486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.050853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.050890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.051254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.051283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.051654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.051686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.052062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.052092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 
00:29:19.239 [2024-12-09 12:04:27.052455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.052484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.052819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.052850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.053102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.053130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.053516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.053545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.239 [2024-12-09 12:04:27.053833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.239 [2024-12-09 12:04:27.053862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.239 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.054231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.054261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.054620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.054662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.055073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.055103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.055420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.055449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.055702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.055733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 
00:29:19.240 [2024-12-09 12:04:27.056108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.056138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.056519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.056549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.056907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.056939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.057275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.057305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.057660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.057690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.057991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.058020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.058382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.058412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.058758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.058790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.059156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.059185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.059587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.059617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 
00:29:19.240 [2024-12-09 12:04:27.060036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.060067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.060434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.060464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.060829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.060860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.061270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.061299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.061659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.061688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.062046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.062075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.062425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.062453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.062711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.062740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.063031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.063061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.063426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.063454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 
00:29:19.240 [2024-12-09 12:04:27.063818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.063848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.064186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.064214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.064584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.064615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.064992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.065022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.065364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.065395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.240 [2024-12-09 12:04:27.065808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.240 [2024-12-09 12:04:27.065839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.240 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.066200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.066229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.066486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.066516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.066903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.066934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.067293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.067322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 
00:29:19.241 [2024-12-09 12:04:27.067774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.067804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.068166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.068196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.068564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.068593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.068977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.069009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.069254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.069670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.069704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.070110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.070141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.070392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.070421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.070796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.070827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.071193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.071223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 
00:29:19.241 [2024-12-09 12:04:27.071572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.071601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.071999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.072031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.072382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.072412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.072758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.072790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.073142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.073171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.073513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.073541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.073902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.074303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.074334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.074752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.074784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.075175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.075204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 
00:29:19.241 [2024-12-09 12:04:27.075618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.075913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.075941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.076341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.076371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.076746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.076778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.077049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.077084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.077476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.077505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.077868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.077899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.078265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.078294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.078695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.079059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.079088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 
00:29:19.241 [2024-12-09 12:04:27.079478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.079507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.079914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.079944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.080302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.080331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.080681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.080711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.241 [2024-12-09 12:04:27.081055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.241 [2024-12-09 12:04:27.081084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.241 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.081424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.081455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.081726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.081755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.082093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.082122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.082442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.082473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.082840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.082870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 
00:29:19.242 [2024-12-09 12:04:27.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.083257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.083615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.083655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.084016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.084044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.084403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.084432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.084792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.084822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.085075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.085104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.085360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.085392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.085654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.085685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.086095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.086125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 00:29:19.242 [2024-12-09 12:04:27.086510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.242 [2024-12-09 12:04:27.086538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.242 qpair failed and we were unable to recover it. 
00:29:19.242 [2024-12-09 12:04:27.086884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.242 [2024-12-09 12:04:27.086915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.242 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 12:04:27.086884 and 12:04:27.166680; only the timestamps differ ...]
00:29:19.522 [2024-12-09 12:04:27.166620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.522 [2024-12-09 12:04:27.166680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.522 qpair failed and we were unable to recover it.
00:29:19.522 [2024-12-09 12:04:27.167044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.167075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.167331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.167360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.167717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.167748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.168123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.168154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.168520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.168550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.168915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.169157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.169191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.169549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.169578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.169961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.170006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.170363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.170393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 
00:29:19.522 [2024-12-09 12:04:27.170622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.170677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.171053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.171084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.171447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.171477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.171901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.171932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.172281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.172310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.172673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.172705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.173099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.173129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.173494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.173525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.173871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.173902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.174259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.174290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 
00:29:19.522 [2024-12-09 12:04:27.174659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.174690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.175029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.175059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.175436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.175468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.175834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.175864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.176306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.176336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.176693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.176725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.177094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.177123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.177373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.177402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.177622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.177667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.178075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.178105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 
00:29:19.522 [2024-12-09 12:04:27.178463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.178493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.178831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.178862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.179198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.179227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.179573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.179604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.179951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.179981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.180350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.522 [2024-12-09 12:04:27.180386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.522 qpair failed and we were unable to recover it. 00:29:19.522 [2024-12-09 12:04:27.180743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.180774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.181059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.181093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.181479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.181509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.181864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.181894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-12-09 12:04:27.182237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.182266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.182570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.182600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.182999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.183030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.183387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.183417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.183781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.183813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.184046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.184076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.184442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.184471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.184711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.184746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.185183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.185213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.185653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.185685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-12-09 12:04:27.186036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.186065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.186434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.186463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.186836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.186868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.187233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.187263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.187626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.187669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.188000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.188030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.188362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.188391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.188747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.188779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.189132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.189164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.189530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.189560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-12-09 12:04:27.189912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.189944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.190311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.190342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.190704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.190742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.191169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.191200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.191559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.191589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.191981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.192013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.192370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.192400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.192806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.192836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.193197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.193226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.193535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.193565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 
00:29:19.523 [2024-12-09 12:04:27.193931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.193963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.194318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.194348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.194741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.194772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.195170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.195199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.195554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.523 [2024-12-09 12:04:27.195583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.523 qpair failed and we were unable to recover it. 00:29:19.523 [2024-12-09 12:04:27.195956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.195988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.196239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.196268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.196620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.196666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.197036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.197065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.197447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.197475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 
00:29:19.524 [2024-12-09 12:04:27.197832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.197863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.198108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.198137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.198474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.198503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.198905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.198935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.199299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.199327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.199756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.199786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.200138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.200167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.200313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.200341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.200726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.200756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.201112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.201142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 
00:29:19.524 [2024-12-09 12:04:27.201508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.201537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.201885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.201916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.202260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.202290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.202616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.202655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.203034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.203062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.203310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.203343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.203610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.203650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.204059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.204089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.204430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.204460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.204833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.204863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 
00:29:19.524 [2024-12-09 12:04:27.205231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.205260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.205627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.205670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.206022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.206051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.206311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.206346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.206702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.206733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.207101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.207129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.207479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.207508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.207902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.207932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.208176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.208208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.208434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.208464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 
00:29:19.524 [2024-12-09 12:04:27.208813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.208843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.209210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.209238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.209599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.209628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.210050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.210079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.524 [2024-12-09 12:04:27.210435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.524 [2024-12-09 12:04:27.210464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.524 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.210809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.210840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.211210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.211238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.211520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.211549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.211902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.211933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.212305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.212335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 
00:29:19.525 [2024-12-09 12:04:27.212712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.212742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.213107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.213137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.213499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.213528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.213870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.213899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.214263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.214292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.214634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.214680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.215009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.215038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.215397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.215425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.215788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.215819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 00:29:19.525 [2024-12-09 12:04:27.216081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.525 [2024-12-09 12:04:27.216110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.525 qpair failed and we were unable to recover it. 
00:29:19.525 [2024-12-09 12:04:27.216453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.525 [2024-12-09 12:04:27.216490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.525 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 210 times between 12:04:27.216453 and 12:04:27.299574; only the timestamps differ ...]
00:29:19.530 [2024-12-09 12:04:27.299542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.530 [2024-12-09 12:04:27.299574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.530 qpair failed and we were unable to recover it.
00:29:19.530 [2024-12-09 12:04:27.299908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.530 [2024-12-09 12:04:27.299939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.530 qpair failed and we were unable to recover it. 00:29:19.530 [2024-12-09 12:04:27.300304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.530 [2024-12-09 12:04:27.300333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.530 qpair failed and we were unable to recover it. 00:29:19.530 [2024-12-09 12:04:27.300688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.530 [2024-12-09 12:04:27.300718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.530 qpair failed and we were unable to recover it. 00:29:19.530 [2024-12-09 12:04:27.301095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.301123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.301490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.301519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.301880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.301911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.302253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.302284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.302667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.302704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.303033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.303062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.303411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.303439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 
00:29:19.531 [2024-12-09 12:04:27.303781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.303813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.304179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.304208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.304565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.304595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.305034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.305067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.305423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.305452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.305709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.305739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.306141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.306170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.306522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.306553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.306883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.306914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.307322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 
00:29:19.531 [2024-12-09 12:04:27.307690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.307721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.308104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.308134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.308492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.308520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.308909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.308940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.309319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.309348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.309737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.309906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.309935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.310315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.310344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.310695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.310731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.311110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.311138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 
00:29:19.531 [2024-12-09 12:04:27.311497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.311527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.311865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.311895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.312239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.312269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.312614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.312655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.313041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.313070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.313435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.313466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.313842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.313873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.314241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.314270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.314715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.314746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.315171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.315200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 
00:29:19.531 [2024-12-09 12:04:27.315562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.315592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.315954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.315985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.531 qpair failed and we were unable to recover it. 00:29:19.531 [2024-12-09 12:04:27.316345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.531 [2024-12-09 12:04:27.316373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.316742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.316773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.317055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.317084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.317431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.317460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.317799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.317832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.318243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.318271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.318627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.318676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.319030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.319059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 
00:29:19.532 [2024-12-09 12:04:27.319442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.319473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.319813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.319844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.320198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.320228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.320589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.320619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.321000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.321029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.321399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.321428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.321787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.321818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.322179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.322207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.322575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.322605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.322994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.323026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 
00:29:19.532 [2024-12-09 12:04:27.323396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.323425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.323792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.323822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.324184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.324214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.324572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.324602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.324941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.324972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.325385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.325415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.325857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.325888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.326291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.326320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.326604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.326635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.327011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.327041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 
00:29:19.532 [2024-12-09 12:04:27.327395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.327425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.327823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.327853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.328109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.328137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.328472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.328501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.328873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.328904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.329269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.329304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.329673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.329704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.330105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.330135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.330497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.330526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.330779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.330809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 
00:29:19.532 [2024-12-09 12:04:27.331196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.331226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.532 qpair failed and we were unable to recover it. 00:29:19.532 [2024-12-09 12:04:27.331632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.532 [2024-12-09 12:04:27.331676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.332056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.332087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.332463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.332494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.332763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.332793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.333186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.333216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.333558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.333593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.333949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.333980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.334232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.334261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.334652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.334684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 
00:29:19.533 [2024-12-09 12:04:27.335028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.335060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.335336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.335366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.335720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.335752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.336171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.336203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.336551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.336580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.336957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.336987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.337363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.337393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.337753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.337783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.337926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.337956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.338296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.338326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 
00:29:19.533 [2024-12-09 12:04:27.338670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.338702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.339054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.339083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.339454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.339489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.339977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.340008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.340174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.340203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.340441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.340471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.340825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.340856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.341226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.341257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.341698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.341728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.342067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.342097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 
00:29:19.533 [2024-12-09 12:04:27.342446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.342477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.342872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.342903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.343261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.343291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.343522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.343550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.343804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.343834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.344218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.344247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.344478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.344508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.344875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.344906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.345251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.345281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.345699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.345729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 
00:29:19.533 [2024-12-09 12:04:27.346070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.533 [2024-12-09 12:04:27.346100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.533 qpair failed and we were unable to recover it. 00:29:19.533 [2024-12-09 12:04:27.346478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.346509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.346788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.346817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.347207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.347238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.347482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.347512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.347731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.347762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.348129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.348159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.348527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.348556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.348846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.348876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 00:29:19.534 [2024-12-09 12:04:27.349248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.534 [2024-12-09 12:04:27.349285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.534 qpair failed and we were unable to recover it. 
00:29:19.534 [2024-12-09 12:04:27.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.534 [2024-12-09 12:04:27.349680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.534 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 210 times between 12:04:27.349 and 12:04:27.430; the intermediate repetitions are elided here ...]
00:29:19.812 [2024-12-09 12:04:27.430808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.812 [2024-12-09 12:04:27.430839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.812 qpair failed and we were unable to recover it.
00:29:19.812 [2024-12-09 12:04:27.431092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.431122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.431555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.431584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.432025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.432055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.432418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.432448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.432808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.432840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.433195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.433224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.433590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.433625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.434035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.434065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.434431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.434466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.434805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.434837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 
00:29:19.812 [2024-12-09 12:04:27.435169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.435200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.435574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.435603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.435963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.435994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.436356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.436385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.436762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.436796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.437138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.437168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.437546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.437576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.437788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.437818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.438185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.812 [2024-12-09 12:04:27.438214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.812 qpair failed and we were unable to recover it. 00:29:19.812 [2024-12-09 12:04:27.438578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.438608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 
00:29:19.813 [2024-12-09 12:04:27.438978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.439010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.439368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.439397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.439742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.439775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.441692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.441757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.442199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.442234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.442685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.442718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.443099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.443127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.443500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.443531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.443894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.443926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.444284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.444314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 
00:29:19.813 [2024-12-09 12:04:27.444680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.444711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.445073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.445103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.445456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.445485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.445825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.445857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.446234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.446263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.446622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.446664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.447020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.447051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.447424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.447454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.447763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.447793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.448035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.448064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 
00:29:19.813 [2024-12-09 12:04:27.448461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.448490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.448866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.448897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.449251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.449282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.449658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.449690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.450027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.450056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.450391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.450420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.450785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.450816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.451186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.451215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.451582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.451613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.452101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.452131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 
00:29:19.813 [2024-12-09 12:04:27.452558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.452587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.452986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.453021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.453395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.453425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.453783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.453815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.454199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.454228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.454604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.454632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.455006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.455035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.455394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.455423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.813 [2024-12-09 12:04:27.455784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.813 [2024-12-09 12:04:27.455812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.813 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.456189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.456218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 
00:29:19.814 [2024-12-09 12:04:27.456577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.456606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.457045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.457076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.457438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.457467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.457848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.457879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.458244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.458272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.458633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.458675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.459020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.459049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.459410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.459438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.459801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.459831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.460086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.460115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 
00:29:19.814 [2024-12-09 12:04:27.460543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.460572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.460916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.460945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.461317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.461347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.461607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.461635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.462014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.462049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.462436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.462466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.462835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.462866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.463224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.463254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.463629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.463670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.464038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.464068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 
00:29:19.814 [2024-12-09 12:04:27.464427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.464456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.464829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.464858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.465087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.465116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.465463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.465491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.465797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.465826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.466189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.466218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.466464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.466492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.466865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.466896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.467255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.467284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.467540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.467569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 
00:29:19.814 [2024-12-09 12:04:27.467940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.467970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.468327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.468357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.814 qpair failed and we were unable to recover it. 00:29:19.814 [2024-12-09 12:04:27.468719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.814 [2024-12-09 12:04:27.468748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.469076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.469107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.469467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.469496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.469845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.469874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.470241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.470270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.470609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.470658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.471024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.471053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.471477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 
00:29:19.815 [2024-12-09 12:04:27.471858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.471888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.472243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.472278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.472634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.472674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.473011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.473039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.473401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.473429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.473847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.473878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.474130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.474159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.474531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.474559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.474931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.474961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.475297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.475326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 
00:29:19.815 [2024-12-09 12:04:27.475710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.475741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.476021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.476049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.476397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.476426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.476799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.476829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.477175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.477204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.477458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.477492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.477865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.477896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.478254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.478284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.478653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.478683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.479021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.479052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 
00:29:19.815 [2024-12-09 12:04:27.479468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.479496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.479883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.479913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.480277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.480306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.480674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.480704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.481099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.481128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.481485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.481513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.481922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.481952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.482314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.482343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.482759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.482788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 00:29:19.815 [2024-12-09 12:04:27.483157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.815 [2024-12-09 12:04:27.483186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.815 qpair failed and we were unable to recover it. 
00:29:19.815 [2024-12-09 12:04:27.483459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.815 [2024-12-09 12:04:27.483488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.815 qpair failed and we were unable to recover it.
[... the three-line error sequence above repeats with advancing timestamps through 12:04:27.564245: roughly 200 more connect() attempts to 10.0.0.2 port 4420 on tqpair=0x199d0c0, each failing with errno = 111 (ECONNREFUSED) and each ending in "qpair failed and we were unable to recover it." ...]
00:29:19.821 [2024-12-09 12:04:27.564584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.564613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.564966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.564996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.565358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.565387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.565657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.565688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.566039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.566068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.566420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.566448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.566810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.566841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.567213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.567242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.567581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.567609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.568035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.568066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 
00:29:19.821 [2024-12-09 12:04:27.568432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.568460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.568799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.568828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.569179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.569208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.569570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.569599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.569966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.569996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.570364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.570754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.570784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.571150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.571178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.571544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.571572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.571947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.571979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 
00:29:19.821 [2024-12-09 12:04:27.572350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.572378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.572741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.572771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.573138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.573166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.573527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.821 [2024-12-09 12:04:27.573555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.821 qpair failed and we were unable to recover it. 00:29:19.821 [2024-12-09 12:04:27.573933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.573964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.574339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.574367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.574732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.574763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.575129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.575157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.575527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.575555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.575993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.576023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 
00:29:19.822 [2024-12-09 12:04:27.576382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.576412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.576720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.576749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.577112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.577141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.577503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.577532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.577876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.577906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.578257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.578287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.578695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.578725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.579103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.579131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.579475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.579505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.579889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.579918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 
00:29:19.822 [2024-12-09 12:04:27.580293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.580321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.580674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.580703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.581054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.581091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.581456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.581485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.581857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.581887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.582264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.582292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.582655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.582691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.583103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.583131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.583479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.583507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.583873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.583904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 
00:29:19.822 [2024-12-09 12:04:27.584278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.584306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.584675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.584704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.585106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.585136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.585506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.585534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.585906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.585936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.586320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.586349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.586609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.586651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.587034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.587064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.587415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.587444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.587829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.587859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 
00:29:19.822 [2024-12-09 12:04:27.588224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.588254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.588651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.588681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.589042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.589070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.589439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.822 [2024-12-09 12:04:27.589468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.822 qpair failed and we were unable to recover it. 00:29:19.822 [2024-12-09 12:04:27.589839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.589868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.590238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.590266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.590704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.591105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.591133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.591487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.591516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.591791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.591822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 
00:29:19.823 [2024-12-09 12:04:27.592187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.592216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.592480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.592510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.592895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.592925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.593289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.593322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.593685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.593715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.594104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.594133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.594517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.594545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.594904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.594934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.595296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.595325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.595660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.595691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 
00:29:19.823 [2024-12-09 12:04:27.596066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.596094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.596498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.596876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.596905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.597282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.597312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.597677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.597708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.598071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.598100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.598487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.598515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.598865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.598895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.599293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.599322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.599687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.599717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 
00:29:19.823 [2024-12-09 12:04:27.600083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.600113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.600476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.600506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.600886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.600915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.601342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.601371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.601737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.601767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.602134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.602162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.602504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.602533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.602926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.602956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.823 qpair failed and we were unable to recover it. 00:29:19.823 [2024-12-09 12:04:27.603319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.823 [2024-12-09 12:04:27.603347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.603605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.603634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 
00:29:19.824 [2024-12-09 12:04:27.603896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.603932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.604283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.604312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.604676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.604707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.605099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.605128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.605498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.605526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.605927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.605957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.606204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.606233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.606599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.606627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.606996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.607027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.607385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.607414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 
00:29:19.824 [2024-12-09 12:04:27.607773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.607804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.608040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.608069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.608417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.608447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.608816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.608846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.609212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.609242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.609613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.609652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.609929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.609958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.610310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.610347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.610689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.610974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.611002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 
00:29:19.824 [2024-12-09 12:04:27.611357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.611386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.611707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.611737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.612104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.612132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.612495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.612524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.612910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.612940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.613300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.613330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.613699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.613729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.614112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.614140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.614489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.614518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 00:29:19.824 [2024-12-09 12:04:27.614752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.824 [2024-12-09 12:04:27.614781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:19.824 qpair failed and we were unable to recover it. 
00:29:19.824 [2024-12-09 12:04:27.615140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.824 [2024-12-09 12:04:27.615169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:19.824 qpair failed and we were unable to recover it.
00:29:19.824 [... the above three-line sequence (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x199d0c0 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeated ~200 more times between 12:04:27.615 and 12:04:27.696 ...]
00:29:20.103 [2024-12-09 12:04:27.695751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.103 [2024-12-09 12:04:27.695781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.103 qpair failed and we were unable to recover it.
00:29:20.103 [2024-12-09 12:04:27.696146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.696175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.696536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.696566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.696907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.696937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.697295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.697324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.697689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.697719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.698093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.698122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.698490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.698519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.698771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.698800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.699063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.699091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.699442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.699471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-12-09 12:04:27.699816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.699847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.700198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.700228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.700586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.700614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.700992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.701022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.701384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.701765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.701795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.702174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.702203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.702575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.702613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.702987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.703022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.703391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.703420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.103 [2024-12-09 12:04:27.703715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.703745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.704125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.704154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.704407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.704436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.704809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.704838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.705201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.705230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.705592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.705621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.706011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.706041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.706406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.706434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.706805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.706835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 00:29:20.103 [2024-12-09 12:04:27.707211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.103 [2024-12-09 12:04:27.707239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.103 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-12-09 12:04:27.707595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.707624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.707997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.708026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.708393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.708422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.708774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.708805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.709156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.709185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.709548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.709576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.709966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.709995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.710352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.710380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.710757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.710788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.711155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.711183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-12-09 12:04:27.711547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.711576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.711925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.711956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.712264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.712293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.712665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.712697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.713052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.713080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.713446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.713480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.713822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.713852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.714225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.714254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.714615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.714660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.715008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.715037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-12-09 12:04:27.715395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.715424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.715784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.715814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.716171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.716200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.716557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.716586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.716952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.716982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.717352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.717380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.717756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.717787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.718165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.718193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.718560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.718589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.718963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.718994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 
00:29:20.104 [2024-12-09 12:04:27.719351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.719381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.719738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.719768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.720144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.720172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.720504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.720532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.720914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.721300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.721329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.721704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.721734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.722119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.722147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.722502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.104 [2024-12-09 12:04:27.722531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.104 qpair failed and we were unable to recover it. 00:29:20.104 [2024-12-09 12:04:27.722895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.722926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.105 [2024-12-09 12:04:27.723340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.723369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.723721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.723751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.724047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.724081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.724414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.724444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.724798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.724828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.725183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.725211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.725560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.725588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.725935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.725965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.726337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.726366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.726745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.726776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.105 [2024-12-09 12:04:27.727147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.727176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.727531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.727559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.727914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.727945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.728315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.728345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.728703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.728733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.729095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.729124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.729489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.729518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.729904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.729933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.730301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.730329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.730693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.730723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.105 [2024-12-09 12:04:27.731084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.731113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.731464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.731493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.731858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.731888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.732242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.732269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.732650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.732680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.733016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.733044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.733412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.733441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.733804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.733833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.734177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.734205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.734568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.734597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 
00:29:20.105 [2024-12-09 12:04:27.734993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.735025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.735382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.735412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.735769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.735799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.736160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.736189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.736559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.736588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.736988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.737019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.737379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.737409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.737780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.737810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.105 qpair failed and we were unable to recover it. 00:29:20.105 [2024-12-09 12:04:27.738190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.105 [2024-12-09 12:04:27.738220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.738568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.738598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 
00:29:20.106 [2024-12-09 12:04:27.738909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.738940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.739304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.739334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.739654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.739684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.739926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.739959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.740235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.740265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.740628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.740669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.740909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.740938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.741281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.741310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.741674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.741705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.742092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.742121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 
00:29:20.106 [2024-12-09 12:04:27.742489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.742518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.742886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.742916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.743273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.743302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.743680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.743709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.744093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.744121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.744484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.744514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.744883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.744913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.745166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.745195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.745544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.745573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 00:29:20.106 [2024-12-09 12:04:27.745923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.106 [2024-12-09 12:04:27.745953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.106 qpair failed and we were unable to recover it. 
00:29:20.106 [2024-12-09 12:04:27.746310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.106 [2024-12-09 12:04:27.746338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.106 qpair failed and we were unable to recover it.
00:29:20.111 [... the same three-line record repeats continuously from 12:04:27.746 through 12:04:27.817 as the host retries 10.0.0.2:4420; every connect() is refused with errno = 111 ...]
00:29:20.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 234496 Killed "${NVMF_APP[@]}" "$@"
00:29:20.111 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:20.111 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:20.111 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:29:20.111 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:20.111 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:20.111 [... connect() retry failures (12:04:27.818234 through 12:04:27.820474) interleaved with the trace above; duplicates omitted ...]
00:29:20.111 [... connect() retry failures continue (12:04:27.820828 through 12:04:27.828216, tqpair=0x199d0c0, addr=10.0.0.2, port=4420); duplicates omitted ...]
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=235529
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 235529
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 235529 ']'
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:20.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:20.112 [... connect() retry failures (12:04:27.828466 through 12:04:27.830258) interleaved with the trace above; duplicates omitted ...]
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:20.112 12:04:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:20.112 [... connect() retry failures (12:04:27.830616 through 12:04:27.834004) continue; duplicates omitted ...]
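The trace above restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits, up to max_retries=100, for the RPC listener at /var/tmp/spdk.sock. A stand-alone C sketch of that wait loop (an approximation of what waitforlisten does, not the actual shell implementation):

    /* Approximation of the waitforlisten idea: retry connecting to the RPC
     * UNIX-domain socket until the freshly started nvmf_tgt accepts, giving
     * up after max_retries attempts (100 in the trace above). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr = {0};
        const int max_retries = 100; /* matches 'local max_retries=100' */

        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

        for (int i = 1; i <= max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("RPC listener up after %d attempt(s)\n", i);
                close(fd);
                return 0;
            }
            close(fd);
            sleep(1); /* target still initializing; try again */
        }
        fprintf(stderr, "gave up waiting for /var/tmp/spdk.sock\n");
        return 1;
    }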
00:29:20.112 [... the posix_sock_create/nvme_tcp_qpair_connect_sock error triplet repeats unchanged (12:04:27.834398 through 12:04:27.883266, tqpair=0x199d0c0, addr=10.0.0.2, port=4420) while the host keeps retrying; duplicates omitted ...]
[2024-12-09 12:04:27.884930] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:29:20.116 [2024-12-09 12:04:27.885002] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the same failure sequence resumes at 12:04:27.885311 and repeats for every reconnect attempt through 12:04:27.951298 ...]
00:29:20.120 [2024-12-09 12:04:27.951519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.120 [2024-12-09 12:04:27.951559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.120 qpair failed and we were unable to recover it.
00:29:20.120 [2024-12-09 12:04:27.951908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.951939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.952315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.952344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.952608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.952646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.953035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.953064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.953437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.953466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.953803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.953833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.954214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.954242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.954614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.954654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.955060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.955088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.955465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.955493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 
00:29:20.121 [2024-12-09 12:04:27.955878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.955909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.956289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.956326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.956714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.956744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.957106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.957135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.957505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.957534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.957908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.957937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.958173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.958202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.958602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.958631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.958975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.959004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.959383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.959412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 
00:29:20.121 [2024-12-09 12:04:27.959777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.959808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.960122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.960151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.960390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.960422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.960867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.960896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.961272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.961300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.961672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.961702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.962040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.962071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.962430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.962459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.962821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.962852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.963215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.963243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 
00:29:20.121 [2024-12-09 12:04:27.963609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.963651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.964014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.964042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.964375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.964405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.964650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.964685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.964959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.964989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.965353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.965381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.965762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.965793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.966163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.966193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.966544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.966572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 00:29:20.121 [2024-12-09 12:04:27.966911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.121 [2024-12-09 12:04:27.966942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.121 qpair failed and we were unable to recover it. 
00:29:20.122 [2024-12-09 12:04:27.967180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.967209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.967575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.967604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.967981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.968012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.968372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.968401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.968813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.968843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.969105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.969134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.969512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.969541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.969895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.969926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.970307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.970335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.970706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.970756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 
00:29:20.122 [2024-12-09 12:04:27.971127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.971156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.971527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.971556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.971900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.971930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.972272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.972301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.972662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.972693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.972993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.973022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.973369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.973397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.973676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.973717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.974074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.974104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.974439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.974468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 
00:29:20.122 [2024-12-09 12:04:27.974856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.974886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.122 [2024-12-09 12:04:27.975196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.122 [2024-12-09 12:04:27.975226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.122 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.975578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.975610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.975852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.975884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.976258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.976287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.976555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.976938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.976967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.977190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.977489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.977517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.977858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.977889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 
00:29:20.396 [2024-12-09 12:04:27.978266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.978294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.978661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.978692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.978945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.978973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.979363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.979392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.979732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.979762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.980149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.980177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.980550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.980580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.980929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.980959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.981304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.981333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.981699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.981729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 
00:29:20.396 [2024-12-09 12:04:27.982163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.982191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.982554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.982582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.982836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.982866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.983242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.983271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.983662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.983715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.984056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.984084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.984378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.984406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.396 qpair failed and we were unable to recover it. 00:29:20.396 [2024-12-09 12:04:27.984771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.396 [2024-12-09 12:04:27.984801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.397 qpair failed and we were unable to recover it. 00:29:20.397 [2024-12-09 12:04:27.985175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.397 [2024-12-09 12:04:27.985203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.397 qpair failed and we were unable to recover it. 00:29:20.397 [2024-12-09 12:04:27.985567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.397 [2024-12-09 12:04:27.985596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.397 qpair failed and we were unable to recover it. 
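Note on the repeated error: on Linux, errno = 111 is ECONNREFUSED -- the TCP handshake reached 10.0.0.2, but nothing was listening on port 4420 (the IANA-registered NVMe/TCP port) yet. The minimal C sketch below is illustrative only, not SPDK's posix.c; the address and port are simply copied from the log, and any host/port with no listener fails the same way.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port, as in the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener this prints: connect() failed, errno = 111 (Connection refused).
             * An unreachable host would instead give errno 110/113 (timeout / no route). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }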
00:29:20.397 [2024-12-09 12:04:27.985979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.397 [2024-12-09 12:04:27.986010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.397 qpair failed and we were unable to recover it.
[... the same sequence repeats through 12:04:27.988955 ...]
00:29:20.397 [2024-12-09 12:04:27.989333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:20.397 [2024-12-09 12:04:27.989337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.397 [2024-12-09 12:04:27.989367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.397 qpair failed and we were unable to recover it.
[the spdk_app_start NOTICE is interleaved with the retry loop: an SPDK application is starting on 4 cores concurrently with the still-failing connect attempts]
00:29:20.397 [2024-12-09 12:04:27.989731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.397 [2024-12-09 12:04:27.989761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.397 qpair failed and we were unable to recover it.
[... the same three-line error continues to repeat through 12:04:28.016402 (elapsed 00:29:20.399), differing only in timestamps ...]
00:29:20.399 [2024-12-09 12:04:28.016767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.016797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.017127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.017157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.017524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.017553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.017953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.017983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.018360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.018388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.018653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.018683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.019020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.019050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.019390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.019419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.019774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.019806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.020167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.020196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 
00:29:20.399 [2024-12-09 12:04:28.020553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.020582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.020927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.020956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.021307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.021337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.021685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.021714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.022109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.022137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.022476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.022504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.022864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.022895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.023305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.023333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.023697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.023728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.024092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.024121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 
00:29:20.399 [2024-12-09 12:04:28.024474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.024503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.024894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.024923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.025292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.025321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.025691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.025723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.026091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.026120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.026487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.026515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.026866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.026895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.027237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.027266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.027627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.027677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.028036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.028065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 
00:29:20.399 [2024-12-09 12:04:28.028335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.028364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.028749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.028781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.029151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.029180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.029542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.029572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.029916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.029946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.399 [2024-12-09 12:04:28.030301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.399 [2024-12-09 12:04:28.030331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.399 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.030696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.030726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.031093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.031123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.031506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.031536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.031891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.031922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 
00:29:20.400 [2024-12-09 12:04:28.032293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.032321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.032698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.032729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.033097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.033126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.033494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.033523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.033761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.033792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.034149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.034177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.034535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.034564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.034863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.034893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.035266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.035295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.035713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.035743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 
00:29:20.400 [2024-12-09 12:04:28.036068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.036097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.036446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.036475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.036756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.036786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.037157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.037186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.037538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.037567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.037918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.037948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.038285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.038315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.038680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.038710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.039118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.039149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 00:29:20.400 [2024-12-09 12:04:28.039533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.400 [2024-12-09 12:04:28.039561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.400 qpair failed and we were unable to recover it. 
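For context: errno = 111 on Linux is ECONNREFUSED, meaning each TCP connect to 10.0.0.2:4420 (the NVMe/TCP target port in this test) was actively refused because nothing was listening there at that moment. A minimal standalone sketch, not SPDK source, with the address and port taken from the log records above, that reproduces the same socket-layer failure:

    /* Hypothetical reproducer (not SPDK code): attempt the same TCP
     * connect the initiator makes and print the errno it gets back.
     * 10.0.0.2 and 4420 are taken from the log records above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener bound on the target side, this prints
             * "connect() failed, errno = 111 (Connection refused)",
             * matching the posix_sock_create errors in this log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

The millisecond spacing of the repeated triplets above is consistent with the initiator immediately retrying each refused qpair connect.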
[the same failure sequence continues from 12:04:28.039931 through 12:04:28.041891]
00:29:20.400 [2024-12-09 12:04:28.042243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.400 [2024-12-09 12:04:28.042272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.400 qpair failed and we were unable to recover it.
00:29:20.400 [2024-12-09 12:04:28.042242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:20.400 [2024-12-09 12:04:28.042293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:20.400 [2024-12-09 12:04:28.042303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:20.400 [2024-12-09 12:04:28.042311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:20.400 [2024-12-09 12:04:28.042317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:20.401 [2024-12-09 12:04:28.042629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.401 [2024-12-09 12:04:28.042670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.401 qpair failed and we were unable to recover it.
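The app_setup_trace notices above are the application announcing its tracing setup rather than part of the failure: tracepoint group mask 0xFFFF is enabled, a snapshot of events can be captured at runtime with 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace' when this is the only SPDK application running), and /dev/shm/nvmf_trace.0 can be copied for offline analysis once the run ends.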
[the same failure sequence continues from 12:04:28.043020 through 12:04:28.044204]
00:29:20.401 [2024-12-09 12:04:28.044377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:20.401 [2024-12-09 12:04:28.044516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:20.401 [2024-12-09 12:04:28.044614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:20.401 [2024-12-09 12:04:28.044615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:20.401 [2024-12-09 12:04:28.044614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.401 [2024-12-09 12:04:28.044669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.401 qpair failed and we were unable to recover it.
[the same failure sequence continues from 12:04:28.045015 through 12:04:28.045910]
[the same failure sequence continues uninterrupted from 12:04:28.046163 through 12:04:28.077651]
00:29:20.403 [2024-12-09 12:04:28.077918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.403 [2024-12-09 12:04:28.077958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.403 qpair failed and we were unable to recover it.
00:29:20.403 [2024-12-09 12:04:28.078319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.078348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.078619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.078663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.079073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.079104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.079443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.079471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.079875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.079907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.080307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.080337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.080681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.080710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.080964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.080994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.081397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.081668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.081698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 
00:29:20.403 [2024-12-09 12:04:28.082022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.082052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.082448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.082478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.082880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.082910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.083199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.083228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.083588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.083616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.084026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.084057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.084417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.403 [2024-12-09 12:04:28.084446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.403 qpair failed and we were unable to recover it. 00:29:20.403 [2024-12-09 12:04:28.084704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.084735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.085134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.085163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.085543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.085572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 
00:29:20.404 [2024-12-09 12:04:28.085827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.085857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.086092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.086125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.086475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.086506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.086749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.086783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.087138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.087408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.087439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.087819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.087849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.088210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.088239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.088604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.088632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.088988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.089017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 
00:29:20.404 [2024-12-09 12:04:28.089198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.089227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.089577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.089607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.089985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.090017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.090398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.090427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.090675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.090706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.090929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.090958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.091227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.091255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.091618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.091672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.092060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.092091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.092323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.092352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 
00:29:20.404 [2024-12-09 12:04:28.092659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.092689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.093096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.093126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.093499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.093534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.093873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.093903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.094038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.094067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.094177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.094204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.094556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.094586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.094969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.095006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.095347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.095377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.095595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.095624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 
00:29:20.404 [2024-12-09 12:04:28.096040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.096071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.096324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.096353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.096595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.096624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.096862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.096893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.097112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.097142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.097542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.097572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.404 [2024-12-09 12:04:28.097824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.404 [2024-12-09 12:04:28.097857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.404 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.097983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.098014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.098377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.098406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.098755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.098787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 
00:29:20.405 [2024-12-09 12:04:28.099059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.099089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.099448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.099476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.099854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.099884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.100139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.100168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.100588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.100617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.100872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.100902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.101250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.101279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.101652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.101683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.101899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.101928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.102195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.102236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 
00:29:20.405 [2024-12-09 12:04:28.102460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.102488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.102910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.102942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.103329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.103358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.103613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.103651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.103906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.103936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.104273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.104303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.104681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.104713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.105103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.105133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.105395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.105423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.105808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.105838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 
00:29:20.405 [2024-12-09 12:04:28.106206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.106234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.106606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.106635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.106882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.106911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.107310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.107341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.107719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.107751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.108103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.108134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.108410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.108439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.108813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.108846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.109185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.109214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.109607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.109649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 
00:29:20.405 [2024-12-09 12:04:28.109995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.110025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.405 qpair failed and we were unable to recover it. 00:29:20.405 [2024-12-09 12:04:28.110297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.405 [2024-12-09 12:04:28.110327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.110550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.110578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.110949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.110980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.111317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.111349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.111712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.111744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.112093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.112128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.112481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.112510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.112862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.112893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.113115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.113143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 
00:29:20.406 [2024-12-09 12:04:28.113493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.113522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.113948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.113977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.114328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.114357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.114722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.114752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.114963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.114991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.115303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.115333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.115747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.115777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.116150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.116180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.116563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.116592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.116973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.117003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 
00:29:20.406 [2024-12-09 12:04:28.117290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.117320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.117475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.117503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.117903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.117933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.118304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.118332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.118772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.118803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.119172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.119201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.119416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.119444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.119813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.119843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.120210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.120239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.120591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.120619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 
00:29:20.406 [2024-12-09 12:04:28.120993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.121023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.121257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.121285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.121634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.121675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.122069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.122099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.122479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.122508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.122748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.122777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.123101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.123130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.123347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.123375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.123734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.123764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.124062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.124091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 
00:29:20.406 [2024-12-09 12:04:28.124553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.406 [2024-12-09 12:04:28.124582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.406 qpair failed and we were unable to recover it. 00:29:20.406 [2024-12-09 12:04:28.124804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.124834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.125229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.125258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.125623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.125662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.125936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.125964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.126200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.126229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.126597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.126626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.126871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.126906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.127128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.127157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.127394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.127422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 
00:29:20.407 [2024-12-09 12:04:28.127784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.127817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.128204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.128232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.128511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.128539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.128888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.128918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.129141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.129169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.129408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.129438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.129800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.129830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.130076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.130104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.130328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.130355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.130722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.130752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 
00:29:20.407 [2024-12-09 12:04:28.131045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.131073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.131473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.131502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.131755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.131786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.132154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.132183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.132550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.132579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.132975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.133005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.133413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.133651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.133682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.133907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.133936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.134313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.134342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 
00:29:20.407 [2024-12-09 12:04:28.134721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.134751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.135119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.135148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.135818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.135848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.136201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.136235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.136366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.136395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.136617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.136657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.136900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.136928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.137306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.137334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.407 [2024-12-09 12:04:28.137616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.137657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 
00:29:20.407 [2024-12-09 12:04:28.138045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.407 [2024-12-09 12:04:28.138073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.407 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.138440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.138469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.138844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.138874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.139083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.139111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.139467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.139496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.139729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.139760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.140132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.140162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.140548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.140576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.140821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.140851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.141189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.141219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 
00:29:20.408 [2024-12-09 12:04:28.141552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.141582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.141915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.141945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.142331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.142360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.142751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.142781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.143056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.143084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.143442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.143470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.143704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.143735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.144135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.144164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.144547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.144575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.145004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.145034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 
00:29:20.408 [2024-12-09 12:04:28.145297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.145329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.145573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.145612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.145997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.146026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.146247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.146276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.146406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.146434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.146706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.146736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.147109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.147137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.147383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.147411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.147763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.147794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.148170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.148199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 
00:29:20.408 [2024-12-09 12:04:28.148567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.148596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.148852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.148881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.149239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.149268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.149635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.149673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.150042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.150072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.150420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.150705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.150736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.151075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.151103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.151462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.151492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 00:29:20.408 [2024-12-09 12:04:28.151837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.151868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.408 qpair failed and we were unable to recover it. 
00:29:20.408 [2024-12-09 12:04:28.152001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.408 [2024-12-09 12:04:28.152028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.152284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.152315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.152508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.152537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.152884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.152915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.153150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.153182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.153504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.153534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.153913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.153943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.154295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.154324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.154697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.154727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.154904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.154936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 
00:29:20.409 [2024-12-09 12:04:28.155217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.155245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.155601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.155630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.155940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.155968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.156335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.156364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.156733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.156763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.157113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.157142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.157511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.157804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.157833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.158145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.158173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.158524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.158559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 
00:29:20.409 [2024-12-09 12:04:28.158779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.158809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.159088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.159115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.159478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.159509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.159872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.159902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.160257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.160285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.160513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.160541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.160783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.160816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.161166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.161194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.161557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.161585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.161831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.161861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 
00:29:20.409 [2024-12-09 12:04:28.162231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.162260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.162471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.162499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.162661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.162691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.163065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.163093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.163367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.163395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.409 [2024-12-09 12:04:28.163847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.409 [2024-12-09 12:04:28.163878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.409 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.164224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.164254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.164479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.164507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.164810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.164840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.165072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.165101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 
00:29:20.410 [2024-12-09 12:04:28.165467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.165497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.165835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.165865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.166227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.166256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.166614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.166652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.167014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.167042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.167301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.167329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.167674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.167704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.168095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.168123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.168364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.168392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.168764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.168800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 
00:29:20.410 [2024-12-09 12:04:28.169139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.169169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.169573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.169601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.169966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.169996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.170216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.170244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.170497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.170526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.170943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.170973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.171196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.171224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.171632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.171673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.172024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.172053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.172461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.172490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 
00:29:20.410 [2024-12-09 12:04:28.172836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.172867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.173262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.173290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.173603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.173632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.174011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.174041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.174291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.174319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.174700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.174730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.175033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.175061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.175298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.175331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.175702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.175735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.175837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.175866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 
00:29:20.410 [2024-12-09 12:04:28.176237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.176265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.176627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.176687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.177058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.177088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.177422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.177451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.177812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.177844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.410 qpair failed and we were unable to recover it. 00:29:20.410 [2024-12-09 12:04:28.178231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.410 [2024-12-09 12:04:28.178260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.178512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.178788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.178821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.179059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.179087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.179342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.179371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 
00:29:20.411 [2024-12-09 12:04:28.179765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.179794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.180163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.180193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.180565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.180593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.180972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.181002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.181221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.181249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.181487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.181515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.181868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.181899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.182272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.182301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.182662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.182692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.183087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.183115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 
00:29:20.411 [2024-12-09 12:04:28.183332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.183360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.183675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.183705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.183969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.183998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.184335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.184363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.184742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.184773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.185124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.185152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.185518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.185547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.185779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.185809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.186173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.186201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.186573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.186602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 
00:29:20.411 [2024-12-09 12:04:28.186846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.186877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.187217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.187245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.187525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.187554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.187918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.187955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.188161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.188191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.188559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.188589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.188938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.188968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.189341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.189371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.189651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 00:29:20.411 [2024-12-09 12:04:28.190029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.411 [2024-12-09 12:04:28.190059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420 00:29:20.411 qpair failed and we were unable to recover it. 
00:29:20.411 [2024-12-09 12:04:28.190436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.411 [2024-12-09 12:04:28.190466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.411 qpair failed and we were unable to recover it.
00:29:20.411 [2024-12-09 12:04:28.190822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.411 [2024-12-09 12:04:28.190851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.411 qpair failed and we were unable to recover it.
00:29:20.411 [2024-12-09 12:04:28.191214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.411 [2024-12-09 12:04:28.191243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.411 qpair failed and we were unable to recover it.
00:29:20.411 [2024-12-09 12:04:28.191458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.411 [2024-12-09 12:04:28.191487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.411 qpair failed and we were unable to recover it.
00:29:20.411 [2024-12-09 12:04:28.191894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.411 [2024-12-09 12:04:28.191925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.411 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.192170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.192199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.192564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.192592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.192961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.192992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.193349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.193377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.193742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.193771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.194144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.194173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.194397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.194425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.194658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.194687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.195054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.195084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.195455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.195485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.195858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.195888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.196270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.196299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.196507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.196536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.196783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.196813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.197035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.197065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.197217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.197246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.197628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.197669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.197811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.197840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.198215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.198244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.198603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.198631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.198914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.198944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.199161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.199191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.199592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.199622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.200005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.200034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.200400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.200430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.200801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.200831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.201190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.201218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.201573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.201602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.201959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.201990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.202381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.202421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.202636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.202675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.203068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.203097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.203434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.203464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.203853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.203883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.204245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.204273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.204634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.204677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.204896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.204924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.205169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.205197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.205585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.412 [2024-12-09 12:04:28.205614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.412 qpair failed and we were unable to recover it.
00:29:20.412 [2024-12-09 12:04:28.205925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.205954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.206335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.206364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.206721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.206752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.207124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.207153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.207530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.207559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.207922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.207952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.208167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.208196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.208501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.208529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.208906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.208936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.209310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.209340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.209560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.209590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.209968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.209998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.210373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.210403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.210756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.210786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.211168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.211197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.211531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.211560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.211929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.212182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.212216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.212429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.212459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.212824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.212854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.213245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.213273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.213490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.213519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.213858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.213887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.214268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.214297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.214677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.214707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.215105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.215457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.215485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.215711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.215741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.216089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.216118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.216495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.216524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.216885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.216915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.217288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.217318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.217690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.217720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.218096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.218125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.218390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.218418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.218782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.218812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.219049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.219077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.219436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.219465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.219832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.219862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.220059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.413 [2024-12-09 12:04:28.220088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.413 qpair failed and we were unable to recover it.
00:29:20.413 [2024-12-09 12:04:28.220485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.220514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.220723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.220751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.221133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.221161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.221369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.221398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.221764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.221799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.222027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.222056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.222415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.222444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.222594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.222621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.222981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.223011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.223397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.223428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.223785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.223816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.224170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.224199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.224566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.224595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.224937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.224967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.225192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.225220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.225539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.225568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.225947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.225977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.226336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.226365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.226597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.226626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.226916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.226945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.227160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.227189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.227540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.227569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.227919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.227951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.228329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.228359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.228579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.228608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.228978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.229008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.229383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.229411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.229809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.229840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.230205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.230233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.230599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.230627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.230986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.231016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.231394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.231424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.231779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.231812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
00:29:20.414 [2024-12-09 12:04:28.231917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.414 [2024-12-09 12:04:28.231945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.414 qpair failed and we were unable to recover it.
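The errno in the repeated connect() failures above is worth decoding: on Linux, errno 111 is ECONNREFUSED, the error connect() returns when the target host answers but nothing is listening on the destination port (here 4420, the IANA-assigned NVMe/TCP port). That is what the initiator-side retry loop keeps hitting while the target listener on 10.0.0.2 is down. A minimal standalone sketch (plain sockets, not SPDK code; the address and port are copied from the log) that surfaces the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same endpoint the log shows: addr=10.0.0.2, port=4420 (NVMe/TCP). */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* With a reachable host but no listener this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }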
00:29:20.414 [2024-12-09 12:04:28.232224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992e10 is same with the state(6) to be set
00:29:20.414 Read completed with error (sct=0, sc=8)
00:29:20.414 starting I/O failed
00:29:20.414 Read completed with error (sct=0, sc=8)
00:29:20.414 starting I/O failed
00:29:20.414 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 [2024-12-09 12:04:28.232752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Write completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 Read completed with error (sct=0, sc=8)
00:29:20.415 starting I/O failed
00:29:20.415 [2024-12-09 12:04:28.233540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.415 [2024-12-09 12:04:28.233819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.233852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.234199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.234228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
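The (sct=0, sc=8) pairs in the dump above decode, per the NVMe base specification, to status code type 0 (Generic Command Status) and status code 0x08, Command Aborted due to SQ Deletion: once the transport reports the CQ error, every outstanding read and write is failed back to the caller while the broken qpair, and with it the submission queue, is deleted. The -6 in the CQ transport error message is negative ENXIO, the "No such device or address" the log spells out itself. A small decoder sketch (illustrative only, not an SPDK API; the table is a partial copy of the spec's generic status values):

    #include <stdio.h>

    /* Partial table of NVMe Generic Command Status values (sct=0); see the
     * NVMe base specification for the full list. */
    static const char *nvme_generic_status(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other (see spec)";
        }
    }

    int main(void)
    {
        /* The pair every failed I/O above completed with: */
        printf("sct=0, sc=8 -> %s\n", nvme_generic_status(8));
        return 0;
    }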
00:29:20.415 [2024-12-09 12:04:28.234492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.234520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.234898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.234928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.235302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.235331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.235661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.235692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.235915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.235943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.236324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.236355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.236735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.236767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.237154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.237182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.237552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.237583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.237805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.237835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.238202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.238231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.238468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.238496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.238891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.238924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.239285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.239314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.239674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.239706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.240085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.240114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.415 qpair failed and we were unable to recover it.
00:29:20.415 [2024-12-09 12:04:28.240449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.415 [2024-12-09 12:04:28.240478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.240758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.240789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.241108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.241136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.241503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.241532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.241923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.241953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.242298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.242326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.242427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.242456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199d0c0 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.242927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.243039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.243403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.243439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.243701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.243735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.243981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.244015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.244408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.244438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.244909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.245015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.245919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.245952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.246310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.246341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.246724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.246756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.246914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.246943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.247213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.247248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.247604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.247635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.248062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.248092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.248393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.248425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.248650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.248682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.248910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.248943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.249338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.249369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.249736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.249767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.250014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.250047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.250430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.250461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.250827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.250858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.251091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.251120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.251531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.251561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.251988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.252018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.252362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.252398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.252756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.252786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.253012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.253041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.253413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.253442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.253733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.253763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.254166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.416 [2024-12-09 12:04:28.254519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.416 [2024-12-09 12:04:28.254549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.416 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.254768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.254798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.255179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.255208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.255446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.255479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.255812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.255845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.256195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.256225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.256587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.256617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.256885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.256915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.257279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.257310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.257529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.257558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.257912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.257944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.258164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.258194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.258423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.258457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.258822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.258853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.259220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.259249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.259583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.259612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.259842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.259872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.260107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.260136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.260509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.260541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.260895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.260927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.261163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.261197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.261573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.261604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.261731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.261764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.261983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.262013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.262352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.262381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.262613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.262656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.263001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.263031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.263388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.263422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.263785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.263816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.264051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.264080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.264310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.264343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.264695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.264726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.265373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.265403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.265780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.265820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.266091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.266122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.266466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.266496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.266869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.266900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.267252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.267282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.417 [2024-12-09 12:04:28.267496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.417 [2024-12-09 12:04:28.267525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.417 qpair failed and we were unable to recover it.
00:29:20.418 [2024-12-09 12:04:28.267908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.418 [2024-12-09 12:04:28.267939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.418 qpair failed and we were unable to recover it.
00:29:20.418 [2024-12-09 12:04:28.268207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.418 [2024-12-09 12:04:28.268240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.418 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.268615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.268654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.268979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.269009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.269128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.269158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.269544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.269573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.269938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.269968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.270329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.270359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.270748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.270779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.270998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.271027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.271412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.271443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.271792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.271824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.272193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.272222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.272596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.272625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.273001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.273031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.692 qpair failed and we were unable to recover it.
00:29:20.692 [2024-12-09 12:04:28.273383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.692 [2024-12-09 12:04:28.273413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.273753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.273783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.274157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.274186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.274564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.274594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.274956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.274989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.275239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.275271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.275655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.275688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.275915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.275944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.276315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.276346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.276722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.276752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.276966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.276995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.277376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.277405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.277744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.277776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.278052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.278081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.278455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.278487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.278879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.278909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.279125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.279154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.279557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.279587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.280005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.280038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.280380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.280417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.280836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.280945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.280976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.281078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.281107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.281442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.281471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.281842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.281874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.282084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.282114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.282361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.282390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.282754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.282784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.283157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.283185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.283572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.283601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.283964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.283998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.284215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.284245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.284585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.284616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.284984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.285015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.285346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.285377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.285618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.285663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.286037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.286066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.286399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.286430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.286658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.693 [2024-12-09 12:04:28.286689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.693 qpair failed and we were unable to recover it.
00:29:20.693 [2024-12-09 12:04:28.287054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.287085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.287456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.287485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.287833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.287863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.288247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.288276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.288659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.288689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.289077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.289105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.289437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.289465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.289690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.289720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.290128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.290157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.290532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.290560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.290767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.290797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.291143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.291171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.291537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.291565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.291804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.291834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.292084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.292112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.292482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.292511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.292739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.292769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.293171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.293203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.293566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.293597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.293974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.294005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.294229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.294264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.294653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.294684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.295040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.295069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.295430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.295461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.295853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.295883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.296135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.296164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.296510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.296539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.296910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.296939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.297178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.297207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.297427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.297456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.297834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.297864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.298129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.298161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.298539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.298571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.298928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.298961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.299218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.299248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.299616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.299656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.299921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.299950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.300321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.300351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.300593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.694 [2024-12-09 12:04:28.300622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.694 qpair failed and we were unable to recover it.
00:29:20.694 [2024-12-09 12:04:28.300988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.301018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.301363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.301394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.301765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.301797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.302167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.302197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.302570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.302599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.302984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.303015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.303243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.303272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.303515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.303548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.303954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.303986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.304340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.304370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.304546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.304575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.304890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.304919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.695 [2024-12-09 12:04:28.305316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.695 qpair failed and we were unable to recover it.
00:29:20.695 [2024-12-09 12:04:28.305695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.305725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.306120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.306151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.306519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.306549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.306867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.306903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.307286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.307315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.307666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.307696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.308065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.308094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.308357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.308387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.308764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.308794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.309182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.309212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 
00:29:20.695 [2024-12-09 12:04:28.309536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.309564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.309884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.309917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.310164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.310193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.310541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.310572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.310918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.310949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.311150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.311178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.311588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.311618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.312005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.312369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.312398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.312780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.312811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 
00:29:20.695 [2024-12-09 12:04:28.313039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.313068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.313439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.313467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.313793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.313833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.314220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.314249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.314628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.314667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.315001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.695 [2024-12-09 12:04:28.315030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.695 qpair failed and we were unable to recover it. 00:29:20.695 [2024-12-09 12:04:28.315397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.315425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.315771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.315803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.316175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.316204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.316583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.316612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 
00:29:20.696 [2024-12-09 12:04:28.316992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.317021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.317401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.317430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.317784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.317816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.318186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.318215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.318586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.318615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.319030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.319076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.319449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.319479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.319855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.319884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.320129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.320158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.320507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.320537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 
00:29:20.696 [2024-12-09 12:04:28.320795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.320825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.321217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.321245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.321614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.321651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.321997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.322027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.322302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.322331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.322680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.322711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.322955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.322984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.323309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.323337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.323697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.323733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.324106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.324135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 
00:29:20.696 [2024-12-09 12:04:28.324383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.324411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.324768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.324798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.325161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.325190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.325561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.325589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.325742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.325772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.326151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.326179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.326558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.326587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.326988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.327018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.327320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.327349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.327718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.327749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 
00:29:20.696 [2024-12-09 12:04:28.327994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.328023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.328365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.328393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.328769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.328799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.329184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.329213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.696 [2024-12-09 12:04:28.329607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.696 [2024-12-09 12:04:28.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.696 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.330020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.330049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.330406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.330436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.330827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.330857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.331250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.331280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.331663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.331694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 
00:29:20.697 [2024-12-09 12:04:28.332052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.332082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.332456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.332485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.332700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.332730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.333111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.333140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.333448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.333478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.333700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.333737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.334092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.334121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.334483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.334512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.334857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.334888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.335104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.335133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 
00:29:20.697 [2024-12-09 12:04:28.335502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.335531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.335939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.335969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.336219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.336248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.336658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.336688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.336957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.336989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.337252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.337282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.337633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.337673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.338035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.338064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.338464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.338493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.338719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.338749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 
00:29:20.697 [2024-12-09 12:04:28.338993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.339021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.339433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.339461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.339796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.339825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.340009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.340039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.340266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.340295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.340521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.340549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.340942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.340971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.341198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.697 [2024-12-09 12:04:28.341227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.697 qpair failed and we were unable to recover it. 00:29:20.697 [2024-12-09 12:04:28.341440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.341472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.341819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.341848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 
00:29:20.698 [2024-12-09 12:04:28.342126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.342154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.342369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.342397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.342759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.342790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.343181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.343209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.343585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.343613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.343870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.343900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.344277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.344306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.344599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.344628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.344850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.344880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.345112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.345140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 
00:29:20.698 [2024-12-09 12:04:28.345488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.345518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.345746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.345776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.346142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.346171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.346537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.346566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.346799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.346829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.347099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.347133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.347506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.347535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.347754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.347784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.347950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.347979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.348357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.348385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 
00:29:20.698 [2024-12-09 12:04:28.348612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.348648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.348998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.349028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.349399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.349428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.349769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.349798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.350101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.350129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.350372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.350400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.350609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.350660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.351056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.351444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.351474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.351827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.351859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 
00:29:20.698 [2024-12-09 12:04:28.352108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.352136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.352505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.352535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.352903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.352933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.353296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.353325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.353569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.353597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.353983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.354013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.698 [2024-12-09 12:04:28.354246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.698 [2024-12-09 12:04:28.354274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.698 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.354719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.354749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.355000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.355029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.355276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.355306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 
00:29:20.699 [2024-12-09 12:04:28.355546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.355575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.355934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.355965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.356304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.356334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.356706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.356736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.357115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.357144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.357357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.357386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.357784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.357819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.358241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.358272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.358675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.358706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 00:29:20.699 [2024-12-09 12:04:28.359067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.699 [2024-12-09 12:04:28.359096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.699 qpair failed and we were unable to recover it. 
00:29:20.699 [2024-12-09 12:04:28.359204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.699 [2024-12-09 12:04:28.359234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.699 qpair failed and we were unable to recover it.
[... the error triplet above repeats verbatim for every reconnect attempt from 12:04:28.359204 through 12:04:28.432041 -- over 200 consecutive posix_sock_create/nvme_tcp_qpair_connect_sock failures against tqpair=0x7f06bc000b90 (addr=10.0.0.2, port=4420), all with errno = 111 and all ending "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between repetitions. ...]
00:29:20.704 [2024-12-09 12:04:28.432144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.704 [2024-12-09 12:04:28.432174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.704 qpair failed and we were unable to recover it. 00:29:20.704 [2024-12-09 12:04:28.432530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.704 [2024-12-09 12:04:28.432559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.704 qpair failed and we were unable to recover it. 00:29:20.704 [2024-12-09 12:04:28.432776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.432806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.433207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.433236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.433520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.433548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.433941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.433972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.434333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.434361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.434736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.434765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.435067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.435096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.435467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.435496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 
00:29:20.705 [2024-12-09 12:04:28.435862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.435891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.436270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.436299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.436522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.436550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.436918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.436949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.437183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.437216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.437434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.437464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.437733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.437767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.438150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.438181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.438420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.438449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.438800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.438829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 
00:29:20.705 [2024-12-09 12:04:28.439196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.439226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.439590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.439619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.439994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.440025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.440328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.440358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.440590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.440623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.440957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.440987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.441382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.441412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.441787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.441819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.442151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.442180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.442406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.442437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 
00:29:20.705 [2024-12-09 12:04:28.442555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.442589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.442996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.443027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.443270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.443299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.443551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.443584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.443970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.444000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.444336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.444372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.444723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.444753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.445147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.445175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.445549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.445790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.445820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 
00:29:20.705 [2024-12-09 12:04:28.446195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.705 [2024-12-09 12:04:28.446223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.705 qpair failed and we were unable to recover it. 00:29:20.705 [2024-12-09 12:04:28.446465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.446494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.446820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.446850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.447084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.447113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.447518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.447548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.447763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.447794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.448142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.448171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.448504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.448534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.448770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.448806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.449174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.449204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 
00:29:20.706 [2024-12-09 12:04:28.449581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.449610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.450000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.450030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.450399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.450428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.450812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.450843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.451219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.451248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.451614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.451651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.451992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.452021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.452283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.452316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.452413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.452441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.452701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.452730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 
00:29:20.706 [2024-12-09 12:04:28.453107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.453137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.453493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.453522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.453775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.453806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.454020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.454048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.454436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.454464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.454839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.454869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.455241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.455270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.455654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.455684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.456052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.456080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.456440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.456468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 
00:29:20.706 [2024-12-09 12:04:28.456708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.456738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.457130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.457159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.457425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.457456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.457833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.457862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.458159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.458189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.458530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.706 [2024-12-09 12:04:28.458565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.706 qpair failed and we were unable to recover it. 00:29:20.706 [2024-12-09 12:04:28.458777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.458809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.459162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.459191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.459560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.459590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.459983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.460013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 
00:29:20.707 [2024-12-09 12:04:28.460221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.460249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.460594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.460624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.460984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.461013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.461372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.461401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.461756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.461786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.462023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.462051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.462420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.462449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.462813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.462844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.463210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.463239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.463625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.463663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 
00:29:20.707 [2024-12-09 12:04:28.463885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.463913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.464303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.464332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.464710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.464742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.464973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.465002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.465222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.465251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.465666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.465908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.465936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.466163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.466192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.466432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.466465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.466816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.466846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 
00:29:20.707 [2024-12-09 12:04:28.467222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.467619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.467658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.468066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.468096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.468447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.468478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.468857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.468888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.469224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.469253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.469623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.469660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.470014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.470044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.470401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.470431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.470756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.470788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 
00:29:20.707 [2024-12-09 12:04:28.471199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.471230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.471593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.471622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.471994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.472025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.472380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.472411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.707 [2024-12-09 12:04:28.472833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.707 qpair failed and we were unable to recover it. 00:29:20.707 [2024-12-09 12:04:28.473193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.473229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.473578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.473607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.473981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.474011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.474279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.474308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.474562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.474590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 
00:29:20.708 [2024-12-09 12:04:28.474993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.475025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.475279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.475311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.475689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.475722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.476105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.476135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.476500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.476529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.476800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.476829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.477076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.477105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.477333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.477361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.477658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.477689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 00:29:20.708 [2024-12-09 12:04:28.477944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.708 [2024-12-09 12:04:28.477973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420 00:29:20.708 qpair failed and we were unable to recover it. 
00:29:20.708 [2024-12-09 12:04:28.478086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.708 [2024-12-09 12:04:28.478118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420
00:29:20.708 qpair failed and we were unable to recover it.
00:29:20.708 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f06bc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously with increasing timestamps from 12:04:28.478493 through 12:04:28.538242 ...]
00:29:20.712 [2024-12-09 12:04:28.538674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.712 [2024-12-09 12:04:28.538735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420
00:29:20.712 qpair failed and we were unable to recover it.
00:29:20.713 [... the same three-line sequence repeats for the new tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420, with increasing timestamps from 12:04:28.539067 through 12:04:28.551514 ...]
00:29:20.713 [2024-12-09 12:04:28.551694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.713 [2024-12-09 12:04:28.551710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.713 qpair failed and we were unable to recover it. 00:29:20.713 [2024-12-09 12:04:28.552049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.713 [2024-12-09 12:04:28.552058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.713 qpair failed and we were unable to recover it. 00:29:20.713 [2024-12-09 12:04:28.552258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.713 [2024-12-09 12:04:28.552271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.713 qpair failed and we were unable to recover it. 00:29:20.713 [2024-12-09 12:04:28.552466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.713 [2024-12-09 12:04:28.552475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.713 qpair failed and we were unable to recover it. 00:29:20.713 [2024-12-09 12:04:28.552818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.713 [2024-12-09 12:04:28.552828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.713 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.553154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.553165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.553494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.553503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.553788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.553796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.554137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.554147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.554493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.554500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 
00:29:20.714 [2024-12-09 12:04:28.554809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.554817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.555164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.555171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.555477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.555485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.555576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.555584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.555884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.555892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.556090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.556384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.556393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.556727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.556735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.557075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.557083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.557432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.557440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 
00:29:20.714 [2024-12-09 12:04:28.557750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.557760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.558089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.558102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.558293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.558304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.558696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.558705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.559082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.559091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.559432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.559440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.559615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.559623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.560037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.560047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.560376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.560384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.560679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.560687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 
00:29:20.714 [2024-12-09 12:04:28.560735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.560741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.561062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.561070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.561397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.561405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.561715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.561723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.562108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.562118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.562452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.562460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.562789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.562798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.563084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.563094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.563328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.563337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 00:29:20.714 [2024-12-09 12:04:28.563769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.714 [2024-12-09 12:04:28.563780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.714 qpair failed and we were unable to recover it. 
00:29:20.715 [2024-12-09 12:04:28.563968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.715 [2024-12-09 12:04:28.563976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.715 qpair failed and we were unable to recover it. 00:29:20.715 [2024-12-09 12:04:28.564320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.715 [2024-12-09 12:04:28.564328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.715 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.564689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.564705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.564886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.564895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.565256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.565265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.565618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.565627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.565812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.565823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.566165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.991 [2024-12-09 12:04:28.566174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.991 qpair failed and we were unable to recover it. 00:29:20.991 [2024-12-09 12:04:28.566399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.566408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.566740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.566749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 
00:29:20.992 [2024-12-09 12:04:28.566947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.566956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.567186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.567195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.567546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.567557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.567921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.567930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.568256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.568265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.568617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.568626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.568962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.568972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.569304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.569313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.569647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.569658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.569846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.569855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 
00:29:20.992 [2024-12-09 12:04:28.570194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.570203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.570524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.570534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.570886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.570897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.571231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.571243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.571433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.571442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.571752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.571762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.572105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.572114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.572439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.572448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.572734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.572743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.573062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.573070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 
00:29:20.992 [2024-12-09 12:04:28.573364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.573372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.573666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.573675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.574044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.574054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.574391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.574401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.574717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.574725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.575028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.575036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.575379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.575386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.575668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.575677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.575981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.575988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.576280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.576288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 
00:29:20.992 [2024-12-09 12:04:28.576570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.576579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.576860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.576869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.577205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.577215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.577570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.577579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.577816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.577824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.578018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.578026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.992 [2024-12-09 12:04:28.578225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.992 [2024-12-09 12:04:28.578233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.992 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.578532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.578540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.578849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.578857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.579057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.579066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 
00:29:20.993 [2024-12-09 12:04:28.579375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.579384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.579718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.579726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.580092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.580100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.580454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.580462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.580728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.580737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.580788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.580795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.581038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.581047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.581249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.581257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.581440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.581447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.581620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.581627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 
00:29:20.993 [2024-12-09 12:04:28.581958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.581966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.582299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.582307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.582605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.582613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.582928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.582936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.583232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.583241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.583542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.583550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.583777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.583785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.584000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.584008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.584293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.584629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.584642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 
00:29:20.993 [2024-12-09 12:04:28.584950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.584957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.585135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.585144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.585307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.585315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.585552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.585560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.585766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.585776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.586125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.586134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.586449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.586457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.586663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.586671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.587019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.587027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.587135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.587143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 
00:29:20.993 [2024-12-09 12:04:28.587444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.993 [2024-12-09 12:04:28.587452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.993 qpair failed and we were unable to recover it. 00:29:20.993 [2024-12-09 12:04:28.587786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.587794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.588021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.588035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.588240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.588249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.588535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.588542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.588946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.588954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.589262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.589270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.589443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.589451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.589854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.589863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 00:29:20.994 [2024-12-09 12:04:28.590080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.590089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it. 
00:29:20.994 [2024-12-09 12:04:28.590373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.994 [2024-12-09 12:04:28.590381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.994 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with only the timestamps advancing, through 2024-12-09 12:04:28.650392 ...]
00:29:20.999 [2024-12-09 12:04:28.650384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.999 [2024-12-09 12:04:28.650392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.999 qpair failed and we were unable to recover it.
00:29:20.999 [2024-12-09 12:04:28.650741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.999 [2024-12-09 12:04:28.650749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:20.999 qpair failed and we were unable to recover it. 00:29:20.999 [2024-12-09 12:04:28.650933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.999 [2024-12-09 12:04:28.650942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.651277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.651285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.651469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.651480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.651782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.651791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.652141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.652149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.652475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.652483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.652693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.652702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.653070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.653078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.653253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.653264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 
00:29:21.000 [2024-12-09 12:04:28.653456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.653467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.653655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.653666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.653863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.653871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.654202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.654219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.654550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.654558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.654781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.654790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.655130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.655137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.655334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.655344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.655537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.655545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.655760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 
00:29:21.000 [2024-12-09 12:04:28.656074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.656082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.656269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.656278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.656461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.656470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.656793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.656802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.657124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.657140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.657448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.657456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.657740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.657749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.657924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.657933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.658147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.658154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.658516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.658526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 
00:29:21.000 [2024-12-09 12:04:28.658823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.658831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.659187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.659195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.659493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.659502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.659780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.659789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.659963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.659971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.660307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.660315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.660507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.660516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.660831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.660841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.661214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.661223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.000 qpair failed and we were unable to recover it. 00:29:21.000 [2024-12-09 12:04:28.661579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.000 [2024-12-09 12:04:28.661588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 
00:29:21.001 [2024-12-09 12:04:28.661634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.661649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.661845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.661854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.662181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.662189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.662390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.662399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.662553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.662562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.662809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.662817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.663061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.663070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.663460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.663469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.663786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.663794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.664114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.664125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 
00:29:21.001 [2024-12-09 12:04:28.664446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.664454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.664636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.664650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.664866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.664874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.665203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.665211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.665383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.665391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.665568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.665578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.665811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.665821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.666153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.666161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.666341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.666350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.666625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.666635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 
00:29:21.001 [2024-12-09 12:04:28.666960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.666970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.667146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.667154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.667551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.667559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.667909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.667919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.668194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.668203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.668503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.668512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.668809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.668817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.669141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.669159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.669486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.669494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.669812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.669821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 
00:29:21.001 [2024-12-09 12:04:28.670152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.670163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.670341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.670349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.670518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.670527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.670573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.670581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.001 [2024-12-09 12:04:28.670866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.001 [2024-12-09 12:04:28.670874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.001 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.671166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.671175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.671475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.671483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.671859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.671868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.672191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.672201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.672545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.672555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 
00:29:21.002 [2024-12-09 12:04:28.672759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.672768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.673158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.673168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.673480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.673490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.673669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.673679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.674018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.674026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.674208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.674216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.674597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.674606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.674939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.674950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.675114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.675123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.675459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.675469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 
00:29:21.002 [2024-12-09 12:04:28.675798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.675806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.676150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.676158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.676458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.676467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.676652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.676660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.676958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.676967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.677250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.677260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.677575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.677584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.677956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.677965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.678254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.678263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.678585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.678593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 
00:29:21.002 [2024-12-09 12:04:28.678910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.678919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.679244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.679252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.679577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.679587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.679950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.679959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.680135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.680143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.680456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.680465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.680628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.680636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.680956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.680965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.681278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.681287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.681598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.681609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 
00:29:21.002 [2024-12-09 12:04:28.681884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.681893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.682261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.682269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.682592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.682601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.682917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.002 [2024-12-09 12:04:28.682924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.002 qpair failed and we were unable to recover it. 00:29:21.002 [2024-12-09 12:04:28.683281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.683290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.683516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.683524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.683714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.683722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.683930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.683939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.684108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.684116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.684437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.684445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 
00:29:21.003 [2024-12-09 12:04:28.684635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.684649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.684955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.684962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.685288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.685295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.685472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.685478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.685802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.685809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.686150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.686157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.686459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.686624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.686632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.686965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.686972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 00:29:21.003 [2024-12-09 12:04:28.687266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.003 [2024-12-09 12:04:28.687275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.003 qpair failed and we were unable to recover it. 
00:29:21.003 [2024-12-09 12:04:28.687449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.003 [2024-12-09 12:04:28.687457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420
00:29:21.003 qpair failed and we were unable to recover it.
00:29:21.003-00:29:21.006 [... the three-line sequence above (connect() failed, errno = 111 / sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, with only the microsecond timestamps advancing from 12:04:28.687653 through 12:04:28.718743 ...]
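On Linux, errno 111 is ECONNREFUSED: the TCP target at 10.0.0.2:4420 is not accepting connections, which is the expected state mid-way through a target-disconnect test. A minimal, self-contained C sketch (illustrative only, not SPDK's posix.c; the address and port are taken from the log above) showing the condition a caller sees when nothing is listening on that address:

/* Minimal illustration (not SPDK source): a blocking TCP connect()
 * to an address with no listener fails with errno == ECONNREFUSED
 * (111 on Linux), which is what posix_sock_create reports above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target stopped this prints: connect() failed, errno = 111 */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}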
00:29:21.006 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:21.006 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:21.006 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:29:21.006 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:21.006 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.006 [... the same connect()/qpair failure sequence continues to repeat, interleaved with the xtrace lines above, with timestamps advancing from 12:04:28.719043 through 12:04:28.723194 ...]
00:29:21.006-00:29:21.008 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence keeps repeating, with timestamps advancing from 12:04:28.723522 through 12:04:28.746391 ...]
00:29:21.008 [2024-12-09 12:04:28.746591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.008 [2024-12-09 12:04:28.746601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.008 qpair failed and we were unable to recover it. 00:29:21.008 [2024-12-09 12:04:28.746880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.008 [2024-12-09 12:04:28.746888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.747207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.747215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.747536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.747742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.747751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.747945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.747954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.748281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.748291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.748467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.748475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.748792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.748800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.748979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.748986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 
00:29:21.009 [2024-12-09 12:04:28.749256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.749264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.749586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.749595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.749904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.749915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.750216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.750225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.750551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.750558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.750866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.750873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.751201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.751210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.751399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.751407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.751669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.751677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.751842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.751849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 
00:29:21.009 [2024-12-09 12:04:28.752016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.752024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.752194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.752202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.752509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.752519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.752827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.752835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.753155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.753163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.753438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.753445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.753744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.753753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.754068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.754076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.754452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.754462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.754776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.754784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 
00:29:21.009 [2024-12-09 12:04:28.755057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.755068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.755379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.755386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.755708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.755716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.755900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.755908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.756203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.756211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.756504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.756513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.756838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.756846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.009 [2024-12-09 12:04:28.757016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.009 [2024-12-09 12:04:28.757023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.009 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.757299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.757306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.757607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.757615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 
00:29:21.010 [2024-12-09 12:04:28.757932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.757943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.758286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.758295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.758613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.758622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.758943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.758955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.759147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.759156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.759472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.759480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.759762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.759770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.760121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.760314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.760322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.760607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.760616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 
00:29:21.010 [2024-12-09 12:04:28.760950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.760958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.761251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.761260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.761441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.761448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.761760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.761769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.762093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.762100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.762407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.762424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.762738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.762745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.763042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.763049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 00:29:21.010 [2024-12-09 12:04:28.763349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.010 [2024-12-09 12:04:28.763359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.010 qpair failed and we were unable to recover it. 
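Note for triage: errno = 111 is ECONNREFUSED on Linux, meaning nothing was listening on 10.0.0.2:4420 (the conventional NVMe/TCP service port) at the moment SPDK's posix sock layer called connect(). In the nvmf_target_disconnect test this is the intended mid-test state: the target has been taken down and the host-side NVMe/TCP driver keeps retrying the qpair. A minimal shell sketch for confirming the same condition by hand on the test node follows; tool availability (python3, ss, bash's /dev/tcp) is an assumption, while the address and port are taken from the log:

  # Show the symbolic name and message for errno 111 (ECONNREFUSED on Linux):
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # Check whether anything is listening on the NVMe/TCP port the host is dialing:
  ss -ltn '( sport = :4420 )'
  # Reproduce the refused connect that the SPDK sock layer is reporting:
  timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo "connect refused/timed out, as in the log"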
00:29:21.010 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:21.010 [2024-12-09 12:04:28.763684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.010 [2024-12-09 12:04:28.763694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420
00:29:21.010 qpair failed and we were unable to recover it.
00:29:21.010 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:21.010 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.010 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.010 [... identical connect()/qpair failure messages, interleaved between the trace lines above, repeat from 12:04:28.763879 through 12:04:28.765299 ...]
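The trace lines above show test case tc2 arming its cleanup trap (dump the app's shared-memory stats if possible, then run nvmftestfini) and creating the backing device while the reconnect storm is still in progress: bdev_malloc_create 64 512 -b Malloc0 allocates a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; issued directly against a running target it would look roughly like the sketch below (a target listening on the default /var/tmp/spdk.sock RPC socket is an assumption):

  # Sketch: the same RPC issued via scripts/rpc.py instead of the rpc_cmd test wrapper.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm the bdev was created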
00:29:21.010 [... the connect() failed, errno = 111 / sock connection error pattern for tqpair=0x7f06b4000b90 (addr=10.0.0.2, port=4420) repeats from 12:04:28.765481 through 12:04:28.793679 ...]
00:29:21.013 [2024-12-09 12:04:28.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.013 [2024-12-09 12:04:28.794031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420
00:29:21.013 qpair failed and we were unable to recover it.
00:29:21.013 [2024-12-09 12:04:28.794233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.794241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.794594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.794602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.794901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.794911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.795203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.795211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.795378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.795561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.795569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.795725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.795733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.796038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.796045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.796214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.796222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 00:29:21.013 [2024-12-09 12:04:28.796544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.013 [2024-12-09 12:04:28.796552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f06b4000b90 with addr=10.0.0.2, port=4420 00:29:21.013 qpair failed and we were unable to recover it. 
00:29:21.013 Malloc0
00:29:21.013 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.013 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:21.013 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.013 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.013 [interleaved with the lines above, the three-line connect() failure repeats 8 times, timestamps 12:04:28.796830 through 12:04:28.798808]
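For context, target_disconnect.sh is standing up the NVMe-oF target through SPDK's JSON-RPC interface while the initiator keeps retrying in the background. Driven by hand instead of through the rpc_cmd wrapper, the same bring-up would look roughly like the sketch below; the bdev sizing is an illustrative assumption, the rest mirrors the commands in the log:

    # Rough standalone equivalent of the target setup steps in this log.
    RPC=scripts/rpc.py                                   # SPDK's JSON-RPC client
    $RPC nvmf_create_transport -t tcp                    # register the TCP transport
    $RPC bdev_malloc_create -b Malloc0 64 512            # RAM-backed bdev; 64 MiB / 512 B blocks assumed
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420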
00:29:21.014 [the three-line connect() failure repeats 20 times, timestamps 12:04:28.799120 through 12:04:28.804121]
00:29:21.014 [2024-12-09 12:04:28.804277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:21.014 [the three-line connect() failure repeats 9 times, timestamps 12:04:28.804419 through 12:04:28.806578]
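The NOTICE above is the target confirming that nvmf_create_transport completed and the TCP transport is initialized. If verifying by hand, nvmf_get_transports reports it over the same RPC socket (output shape abbreviated and illustrative):

    scripts/rpc.py nvmf_get_transports
    # => [ { "trtype": "TCP", ... } ]    # shape illustrative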
00:29:21.014 [the three-line connect() failure repeats 20 times, timestamps 12:04:28.806898 through 12:04:28.812442]
00:29:21.015 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.015 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:21.015 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.015 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.015 [interleaved with the lines above, the three-line connect() failure repeats 8 times, timestamps 12:04:28.812770 through 12:04:28.814893]
00:29:21.015 [the three-line connect() failure repeats 30 times, timestamps 12:04:28.815230 through 12:04:28.823412]
00:29:21.016 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.016 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:21.016 [interleaved with the lines above, the three-line connect() failure repeats 9 times, timestamps 12:04:28.823714 through 12:04:28.826015]
00:29:21.016 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.016 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.016 [the three-line connect() failure repeats 9 times, timestamps 12:04:28.826346 through 12:04:28.828318]
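With nvmf_subsystem_add_ns done, the Malloc0 bdev is now exported as a namespace of cnode1. A by-hand check would be nvmf_get_subsystems, which lists the namespaces each subsystem exposes (output trimmed, shape illustrative):

    scripts/rpc.py nvmf_get_subsystems
    # => ... "nqn": "nqn.2016-06.io.spdk:cnode1",
    #        "namespaces": [ { "nsid": 1, "bdev_name": "Malloc0", ... } ] ...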
00:29:21.016 [the three-line connect() failure repeats 30 times, timestamps 12:04:28.828643 through 12:04:28.835825]
00:29:21.017 [... identical connect failures (errno = 111, tqpair=0x7f06b4000b90, addr=10.0.0.2, port=4420) repeat for attempts 12:04:28.836160 through 12:04:28.837177 ...]
00:29:21.017 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.017 [... identical connect failures at 12:04:28.837488 and 12:04:28.837700 ...]
00:29:21.017 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:21.017 [... identical connect failures at 12:04:28.837906 and 12:04:28.837999 ...]
00:29:21.017 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.017 [... identical connect failure at 12:04:28.838307 ...]
00:29:21.018 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.018 [... identical connect failures repeat for attempts 12:04:28.838649 through 12:04:28.840122 ...]
00:29:21.018 [... identical connect failures (errno = 111, tqpair=0x7f06b4000b90, addr=10.0.0.2, port=4420) continue for attempts 12:04:28.840428 through 12:04:28.844303 ...]
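Note: errno 111 on Linux is ECONNREFUSED. Nothing was accepting TCP connections on 10.0.0.2:4420 while the host's nvme_tcp_qpair_connect_sock() kept retrying; the target's listener only comes up at 12:04:28.844543 below. A minimal sketch that reproduces the same errno outside the test, assuming a local port with no listener (the address and port here are only examples):

    # bash's /dev/tcp redirection performs a real TCP connect(); with no
    # listener on the port, the connect fails with ECONNREFUSED (errno 111)
    # and bash reports "Connection refused" with a nonzero exit status.
    bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' || echo "connect failed (exit=$?)"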
00:29:21.018 [2024-12-09 12:04:28.844543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:21.018 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.018 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:21.018 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.018 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:21.018 [2024-12-09 12:04:28.855283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:21.018 [2024-12-09 12:04:28.855391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:21.018 [2024-12-09 12:04:28.855407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:21.018 [2024-12-09 12:04:28.855413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:21.018 [2024-12-09 12:04:28.855418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:21.018 [2024-12-09 12:04:28.855434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.018 qpair failed and we were unable to recover it.
00:29:21.281 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.281 12:04:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 234827
00:29:21.281 [... the same seven-line CONNECT failure block repeats at 12:04:28.865046 ...]
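The rpc_cmd lines in this trace come from autotest_common.sh, which forwards the arguments to SPDK's JSON-RPC interface. Run by hand against an already-running nvmf_tgt, the equivalent listener setup would look roughly like the sketch below; the transport and subsystem creation steps are assumed to have happened earlier and are not shown in this trace:

    # Add a TCP listener on 10.0.0.2:4420 to the subsystem under test, then
    # to the discovery subsystem, mirroring target_disconnect.sh@25-26 above.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420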
00:29:21.281 [... the same CONNECT failure block (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f06b4000b90; CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it) repeats for every reconnect attempt, roughly 10 ms apart, from 12:04:28.875 through 12:04:29.316 ...]
00:29:21.547 [2024-12-09 12:04:29.326306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.326361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.326371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.326376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.326380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.326390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.336356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.336440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.336449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.336454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.336459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.336468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.346368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.346424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.346442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.346448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.346453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.346468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 
00:29:21.547 [2024-12-09 12:04:29.356406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.356462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.356480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.356487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.356491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.356509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.366421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.366500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.366511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.366516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.366521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.366532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.376461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.376513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.376523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.376528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.376533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.376543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 
00:29:21.547 [2024-12-09 12:04:29.386481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.386524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.386534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.386539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.386543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.386553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.396505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.396553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.396563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.396568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.396572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.396582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 00:29:21.547 [2024-12-09 12:04:29.406544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.406591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.406601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.547 [2024-12-09 12:04:29.406606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.547 [2024-12-09 12:04:29.406610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.547 [2024-12-09 12:04:29.406620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.547 qpair failed and we were unable to recover it. 
00:29:21.547 [2024-12-09 12:04:29.416588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.547 [2024-12-09 12:04:29.416643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.547 [2024-12-09 12:04:29.416653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.548 [2024-12-09 12:04:29.416658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.548 [2024-12-09 12:04:29.416662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.548 [2024-12-09 12:04:29.416672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.548 qpair failed and we were unable to recover it. 00:29:21.548 [2024-12-09 12:04:29.426465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.548 [2024-12-09 12:04:29.426511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.548 [2024-12-09 12:04:29.426520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.548 [2024-12-09 12:04:29.426525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.548 [2024-12-09 12:04:29.426529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.548 [2024-12-09 12:04:29.426540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.548 qpair failed and we were unable to recover it. 00:29:21.809 [2024-12-09 12:04:29.436619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.809 [2024-12-09 12:04:29.436669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.809 [2024-12-09 12:04:29.436679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.809 [2024-12-09 12:04:29.436684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.436689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.436699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.446657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.446711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.446723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.446728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.446732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.446742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.456682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.456731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.456741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.456746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.456750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.456760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.466700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.466748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.466758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.466763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.466767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.466778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.476722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.476769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.476779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.476783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.476788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.476798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.486745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.486796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.486806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.486810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.486817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.486828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.496806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.496877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.496887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.496892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.496896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.496906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.506848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.506893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.506902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.506907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.506911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.506921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.516837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.516880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.516890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.516895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.516899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.516909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.526871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.526922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.526931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.526936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.526940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.526950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.536880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.536929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.536939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.536944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.536948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.536958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.546930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.546975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.546984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.546989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.546993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.547003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.556960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.557003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.557012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.557017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.557021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.557031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.566982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.567034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.567044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.567048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.567052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.567062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.577028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.577077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.577089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.577094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.577098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.577108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.587003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.587052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.587061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.587066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.587071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.587081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 
00:29:21.810 [2024-12-09 12:04:29.597065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.597151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.597161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.597165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.597170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.597180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.606971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.607027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.607037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.607042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.607046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.810 [2024-12-09 12:04:29.607056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.810 qpair failed and we were unable to recover it. 00:29:21.810 [2024-12-09 12:04:29.617111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.810 [2024-12-09 12:04:29.617159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.810 [2024-12-09 12:04:29.617169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.810 [2024-12-09 12:04:29.617174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.810 [2024-12-09 12:04:29.617181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.617191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-12-09 12:04:29.627156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.627208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.627218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.627223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.627227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.627237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-12-09 12:04:29.637180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.637254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.637264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.637269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.637274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.637284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-12-09 12:04:29.647236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.647289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.647298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.647303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.647307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.647316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-12-09 12:04:29.657253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.657298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.657308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.657312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.657317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.657327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-12-09 12:04:29.667274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.667320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.667330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.667335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.667339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.667349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 00:29:21.811 [2024-12-09 12:04:29.677266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.677318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.677328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.677332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.677337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.677347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 
00:29:21.811 [2024-12-09 12:04:29.687323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.811 [2024-12-09 12:04:29.687371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.811 [2024-12-09 12:04:29.687380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.811 [2024-12-09 12:04:29.687385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.811 [2024-12-09 12:04:29.687389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:21.811 [2024-12-09 12:04:29.687399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.811 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.697353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.697408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.697418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.697423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.697427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.697437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.707374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.707420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.707431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.707436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.707441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.707451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 
00:29:22.075 [2024-12-09 12:04:29.717422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.717503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.717512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.717517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.717521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.717531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.727441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.727492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.727501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.727506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.727510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.727520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.737460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.737513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.737523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.737528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.737532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.737542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 
00:29:22.075 [2024-12-09 12:04:29.747500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.747547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.747557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.747568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.747572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.747582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.757486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.757580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.757589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.757594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.757598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.757608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.767549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.767600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.767610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.767615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.767619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.767629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 
00:29:22.075 [2024-12-09 12:04:29.777574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.777625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.777635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.777642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.777647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.777657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.787608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.787661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.787670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.787675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.787680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.787692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 00:29:22.075 [2024-12-09 12:04:29.797613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.075 [2024-12-09 12:04:29.797668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.075 [2024-12-09 12:04:29.797677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.075 [2024-12-09 12:04:29.797682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.075 [2024-12-09 12:04:29.797687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.075 [2024-12-09 12:04:29.797697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.075 qpair failed and we were unable to recover it. 
[... the same seven-message CONNECT failure sequence repeats 68 more times, roughly one attempt every 10 ms, from 12:04:29.787 through 12:04:30.459; every attempt fails on tqpair=0x7f06b4000b90, qpair id 2, with "Unknown controller ID 0x1", sct 1, sc 130, and CQ transport error -6 (No such device or address), each ending in "qpair failed and we were unable to recover it." ...]
00:29:22.606 [2024-12-09 12:04:30.469472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.606 [2024-12-09 12:04:30.469519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.606 [2024-12-09 12:04:30.469529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.606 [2024-12-09 12:04:30.469533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.606 [2024-12-09 12:04:30.469538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.606 [2024-12-09 12:04:30.469548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.606 qpair failed and we were unable to recover it. 00:29:22.606 [2024-12-09 12:04:30.479502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.606 [2024-12-09 12:04:30.479556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.606 [2024-12-09 12:04:30.479566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.606 [2024-12-09 12:04:30.479571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.606 [2024-12-09 12:04:30.479575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.606 [2024-12-09 12:04:30.479590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.606 qpair failed and we were unable to recover it. 00:29:22.868 [2024-12-09 12:04:30.489531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.868 [2024-12-09 12:04:30.489580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.868 [2024-12-09 12:04:30.489590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.868 [2024-12-09 12:04:30.489595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.868 [2024-12-09 12:04:30.489599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.868 [2024-12-09 12:04:30.489609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.868 qpair failed and we were unable to recover it. 
00:29:22.868 [2024-12-09 12:04:30.499437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.868 [2024-12-09 12:04:30.499483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.868 [2024-12-09 12:04:30.499493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.868 [2024-12-09 12:04:30.499498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.868 [2024-12-09 12:04:30.499502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.868 [2024-12-09 12:04:30.499512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.868 qpair failed and we were unable to recover it. 00:29:22.868 [2024-12-09 12:04:30.509584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.509630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.509643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.509648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.509652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.509662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.519737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.519785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.519794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.519799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.519803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.519813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 
00:29:22.869 [2024-12-09 12:04:30.529527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.529577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.529588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.529593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.529597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.529607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.539685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.539734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.539745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.539749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.539754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.539764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.549686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.549737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.549747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.549752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.549756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.549766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 
00:29:22.869 [2024-12-09 12:04:30.559740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.559787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.559797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.559801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.559806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.559816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.569753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.569799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.569810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.569815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.569820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.569830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.579788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.579839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.579848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.579853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.579857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.579868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 
00:29:22.869 [2024-12-09 12:04:30.589810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.589862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.589871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.589876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.589880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.589890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.599793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.599856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.599866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.599870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.599874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.599884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.609822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.609873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.609882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.609887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.609894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.609904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 
00:29:22.869 [2024-12-09 12:04:30.619917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.619975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.619985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.619990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.619994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.620004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.629882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.629929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.629938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.869 [2024-12-09 12:04:30.629943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.869 [2024-12-09 12:04:30.629948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.869 [2024-12-09 12:04:30.629958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.869 qpair failed and we were unable to recover it. 00:29:22.869 [2024-12-09 12:04:30.639925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.869 [2024-12-09 12:04:30.639978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.869 [2024-12-09 12:04:30.639988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.639992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.639997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.640007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 
00:29:22.870 [2024-12-09 12:04:30.649970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.650021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.650030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.650035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.650039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.650049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.659994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.660044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.660054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.660059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.660063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.660073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.670018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.670066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.670075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.670080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.670084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.670094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 
00:29:22.870 [2024-12-09 12:04:30.680061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.680113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.680123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.680128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.680132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.680142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.690083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.690133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.690142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.690147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.690151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.690160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.700118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.700170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.700182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.700187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.700191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.700201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 
00:29:22.870 [2024-12-09 12:04:30.710135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.710179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.710190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.710195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.710199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.710209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.720148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.720205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.720215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.720219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.720224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.720234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.730199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.730248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.730257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.730262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.730266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.730276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 
00:29:22.870 [2024-12-09 12:04:30.740230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.740277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.740287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.740291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.740298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.740309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:22.870 [2024-12-09 12:04:30.750246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.870 [2024-12-09 12:04:30.750299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.870 [2024-12-09 12:04:30.750309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.870 [2024-12-09 12:04:30.750314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.870 [2024-12-09 12:04:30.750318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:22.870 [2024-12-09 12:04:30.750328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:22.870 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.760232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.760288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.760298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.760303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.760307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.760317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-12-09 12:04:30.770313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.770363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.770373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.770378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.770382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.770392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.780298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.780345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.780355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.780359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.780364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.780374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.790250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.790295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.790306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.790311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.790315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.790326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-12-09 12:04:30.800320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.800364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.800377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.800382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.800386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.800397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.810408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.810461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.810471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.810476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.810480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.810490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.820329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.820389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.820399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.820404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.820408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.820418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-12-09 12:04:30.830469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.830522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.830535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.830539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.830544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.830554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.840456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.840501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.840511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.840516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.840520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.840530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.850539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.850590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.850599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.850604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.850608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.850618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-12-09 12:04:30.860571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.134 [2024-12-09 12:04:30.860624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.134 [2024-12-09 12:04:30.860634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.134 [2024-12-09 12:04:30.860642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.134 [2024-12-09 12:04:30.860647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.134 [2024-12-09 12:04:30.860658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-12-09 12:04:30.870588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.870643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.870653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.870661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.870665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.870676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-12-09 12:04:30.880457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.880500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.880510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.880515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.880519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.880530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-12-09 12:04:30.890650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.890699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.890708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.890713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.890717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.890727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-12-09 12:04:30.900643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.900692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.900702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.900707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.900711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.900721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-12-09 12:04:30.910684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.910727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.910736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.910741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.910745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.910758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-12-09 12:04:30.920689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.920743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.920753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.920758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.920762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.920772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-12-09 12:04:30.930734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.930792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.930802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.930806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.930811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.930821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-12-09 12:04:30.940784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.135 [2024-12-09 12:04:30.940848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.135 [2024-12-09 12:04:30.940858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.135 [2024-12-09 12:04:30.940863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.135 [2024-12-09 12:04:30.940867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.135 [2024-12-09 12:04:30.940877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.135 qpair failed and we were unable to recover it. 
[... 63 further connect attempts elided: the same seven-line CONNECT failure block repeats, unchanged except for its timestamps, at roughly 10 ms intervals from 12:04:30.950 through 12:04:31.572 (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; CQ transport error -6 on tqpair=0x7f06b4000b90, qpair id 2), and each attempt ends "qpair failed and we were unable to recover it." ...]
00:29:23.928 [2024-12-09 12:04:31.582501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.582547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.582556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.582561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.582566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.582576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 00:29:23.928 [2024-12-09 12:04:31.592509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.592557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.592567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.592572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.592576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.592586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 00:29:23.928 [2024-12-09 12:04:31.602532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.602574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.602584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.602589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.602593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.602606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 
00:29:23.928 [2024-12-09 12:04:31.612629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.612681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.612690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.612695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.612699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.612709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 00:29:23.928 [2024-12-09 12:04:31.622600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.622650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.622659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.622664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.622669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.622678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 00:29:23.928 [2024-12-09 12:04:31.632621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.632665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.632675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.632679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.632684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.632694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 
00:29:23.928 [2024-12-09 12:04:31.642655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.642698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.928 [2024-12-09 12:04:31.642709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.928 [2024-12-09 12:04:31.642714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.928 [2024-12-09 12:04:31.642718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.928 [2024-12-09 12:04:31.642728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.928 qpair failed and we were unable to recover it. 00:29:23.928 [2024-12-09 12:04:31.652719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.928 [2024-12-09 12:04:31.652768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.652778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.652782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.652787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.652796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.662693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.662735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.662745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.662749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.662754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.662764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-12-09 12:04:31.672712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.672751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.672761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.672766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.672770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.672780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.682716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.682758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.682767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.682772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.682776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.682786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.692822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.692869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.692882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.692887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.692891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.692901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-12-09 12:04:31.702817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.702865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.702874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.702879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.702883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.702894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.712817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.712864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.712874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.712879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.712883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.712893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.722912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.722954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.722964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.722969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.722973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.722984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-12-09 12:04:31.732942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.732992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.733002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.733007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.733014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.733024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.742932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.742998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.743008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.743012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.743017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.743027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.752940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.752982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.752992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.752997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.753001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.753011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 
00:29:23.929 [2024-12-09 12:04:31.762974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.763039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.763048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.763053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.763057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.763067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.773047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.773098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.773108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.773113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.773117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.929 [2024-12-09 12:04:31.773127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.929 qpair failed and we were unable to recover it. 00:29:23.929 [2024-12-09 12:04:31.782995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.929 [2024-12-09 12:04:31.783061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.929 [2024-12-09 12:04:31.783071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.929 [2024-12-09 12:04:31.783076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.929 [2024-12-09 12:04:31.783080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.930 [2024-12-09 12:04:31.783090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.930 qpair failed and we were unable to recover it. 
00:29:23.930 [2024-12-09 12:04:31.793057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.930 [2024-12-09 12:04:31.793102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.930 [2024-12-09 12:04:31.793112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.930 [2024-12-09 12:04:31.793117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.930 [2024-12-09 12:04:31.793121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.930 [2024-12-09 12:04:31.793131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.930 qpair failed and we were unable to recover it. 00:29:23.930 [2024-12-09 12:04:31.803085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.930 [2024-12-09 12:04:31.803127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.930 [2024-12-09 12:04:31.803136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.930 [2024-12-09 12:04:31.803141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.930 [2024-12-09 12:04:31.803145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:23.930 [2024-12-09 12:04:31.803155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.930 qpair failed and we were unable to recover it. 00:29:24.192 [2024-12-09 12:04:31.813158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.813207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.813216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.813221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.813226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.813236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 
00:29:24.192 [2024-12-09 12:04:31.823146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.823235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.823247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.823252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.823256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.823266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 00:29:24.192 [2024-12-09 12:04:31.833159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.833202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.833212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.833216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.833221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.833230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 00:29:24.192 [2024-12-09 12:04:31.843199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.843248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.843258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.843263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.843267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.843277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 
00:29:24.192 [2024-12-09 12:04:31.853241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.853293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.853302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.853307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.853311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.853320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 00:29:24.192 [2024-12-09 12:04:31.863245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.863318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.863327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.863332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.863339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.863349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 00:29:24.192 [2024-12-09 12:04:31.873273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.873314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.192 [2024-12-09 12:04:31.873323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.192 [2024-12-09 12:04:31.873328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.192 [2024-12-09 12:04:31.873332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.192 [2024-12-09 12:04:31.873342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.192 qpair failed and we were unable to recover it. 
00:29:24.192 [2024-12-09 12:04:31.883151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.192 [2024-12-09 12:04:31.883199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.883209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.883215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.883219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.883229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.893310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.893361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.893371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.893375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.893380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.893390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.903349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.903397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.903415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.903421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.903426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.903440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 
00:29:24.193 [2024-12-09 12:04:31.913388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.913480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.913498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.913505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.913509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.913524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.923386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.923472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.923483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.923488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.923493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.923504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.933473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.933521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.933530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.933535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.933539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.933550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 
00:29:24.193 [2024-12-09 12:04:31.943429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.943473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.943484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.943489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.943493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.943503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.953472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.953516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.953526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.953531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.953535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.953545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.963506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.963553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.963563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.963568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.963572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.963582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 
00:29:24.193 [2024-12-09 12:04:31.973544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.973594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.973604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.973609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.973613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.973623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.983561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.983607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.983617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.983622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.983626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.983636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:31.993591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:31.993681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:31.993691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:31.993699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:31.993703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:31.993714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 
00:29:24.193 [2024-12-09 12:04:32.003620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:32.003669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:32.003680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:32.003685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:32.003689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.193 [2024-12-09 12:04:32.003700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.193 qpair failed and we were unable to recover it. 00:29:24.193 [2024-12-09 12:04:32.013646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.193 [2024-12-09 12:04:32.013698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.193 [2024-12-09 12:04:32.013708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.193 [2024-12-09 12:04:32.013713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.193 [2024-12-09 12:04:32.013718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.013728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 00:29:24.194 [2024-12-09 12:04:32.023689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.023735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.023745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.023749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.023754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.023764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 
00:29:24.194 [2024-12-09 12:04:32.033695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.033767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.033776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.033781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.033785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.033798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 00:29:24.194 [2024-12-09 12:04:32.043705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.043747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.043757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.043762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.043766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.043776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 00:29:24.194 [2024-12-09 12:04:32.053797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.053847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.053857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.053861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.053866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.053876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 
00:29:24.194 [2024-12-09 12:04:32.063786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.063835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.063844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.063849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.063854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.063864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 00:29:24.194 [2024-12-09 12:04:32.073791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.194 [2024-12-09 12:04:32.073838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.194 [2024-12-09 12:04:32.073849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.194 [2024-12-09 12:04:32.073854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.194 [2024-12-09 12:04:32.073859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.194 [2024-12-09 12:04:32.073869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.194 qpair failed and we were unable to recover it. 00:29:24.457 [2024-12-09 12:04:32.083826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.457 [2024-12-09 12:04:32.083875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.457 [2024-12-09 12:04:32.083885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.457 [2024-12-09 12:04:32.083890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.457 [2024-12-09 12:04:32.083894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.457 [2024-12-09 12:04:32.083904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.457 qpair failed and we were unable to recover it. 
00:29:24.987 [2024-12-09 12:04:32.755483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.987 [2024-12-09 12:04:32.755525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.987 [2024-12-09 12:04:32.755537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.987 [2024-12-09 12:04:32.755542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.987 [2024-12-09 12:04:32.755546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.987 [2024-12-09 12:04:32.755557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.987 qpair failed and we were unable to recover it. 00:29:24.987 [2024-12-09 12:04:32.765614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.987 [2024-12-09 12:04:32.765659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.987 [2024-12-09 12:04:32.765669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.987 [2024-12-09 12:04:32.765674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.987 [2024-12-09 12:04:32.765678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.987 [2024-12-09 12:04:32.765688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.987 qpair failed and we were unable to recover it. 00:29:24.987 [2024-12-09 12:04:32.775709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.987 [2024-12-09 12:04:32.775757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.987 [2024-12-09 12:04:32.775766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.987 [2024-12-09 12:04:32.775771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.987 [2024-12-09 12:04:32.775776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.987 [2024-12-09 12:04:32.775786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.987 qpair failed and we were unable to recover it. 
00:29:24.988 [2024-12-09 12:04:32.785579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.785627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.785640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.785646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.785650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.785660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.795734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.795780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.795789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.795794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.795798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.795809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.805777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.805844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.805853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.805858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.805863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.805873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 
00:29:24.988 [2024-12-09 12:04:32.815805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.815856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.815868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.815873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.815878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.815888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.825832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.825900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.825910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.825915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.825919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.825929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.835839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.835887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.835897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.835902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.835906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.835917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 
00:29:24.988 [2024-12-09 12:04:32.845867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.845908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.845918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.845923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.845928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.845938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.855907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.855994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.856004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.856008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.856015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.856025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 00:29:24.988 [2024-12-09 12:04:32.865928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.988 [2024-12-09 12:04:32.865970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.988 [2024-12-09 12:04:32.865980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.988 [2024-12-09 12:04:32.865985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.988 [2024-12-09 12:04:32.865989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:24.988 [2024-12-09 12:04:32.865999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.988 qpair failed and we were unable to recover it. 
00:29:25.249 [2024-12-09 12:04:32.875969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.876014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.876023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.876028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.876032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.876042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.885972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.886033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.886043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.886048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.886052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.886062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.896055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.896144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.896154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.896159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.896163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.896173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 
00:29:25.250 [2024-12-09 12:04:32.906022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.906071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.906080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.906085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.906089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.906099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.916049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.916093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.916102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.916107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.916111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.916121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.926050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.926092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.926102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.926107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.926111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.926121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 
00:29:25.250 [2024-12-09 12:04:32.936128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.936176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.936185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.936190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.936194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.936205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.946140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.946185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.946197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.946202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.946206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.946216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.956191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.956269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.956278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.956283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.956287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.956297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 
00:29:25.250 [2024-12-09 12:04:32.966183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.966222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.966231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.966236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.966240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.966250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.976218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.976308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.976317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.976322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.976326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.976336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:32.986118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.986161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.986170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.986178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.986182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.986192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 
00:29:25.250 [2024-12-09 12:04:32.996280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.250 [2024-12-09 12:04:32.996373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.250 [2024-12-09 12:04:32.996383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.250 [2024-12-09 12:04:32.996387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.250 [2024-12-09 12:04:32.996392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.250 [2024-12-09 12:04:32.996401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.250 qpair failed and we were unable to recover it. 00:29:25.250 [2024-12-09 12:04:33.006293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.006343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.006362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.006368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.006372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.006387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.016333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.016381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.016392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.016398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.016402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.016413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 
00:29:25.251 [2024-12-09 12:04:33.026362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.026414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.026433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.026439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.026443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.026457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.036384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.036480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.036491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.036496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.036501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.036511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.046281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.046332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.046350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.046357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.046361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.046375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 
00:29:25.251 [2024-12-09 12:04:33.056526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.056575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.056586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.056591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.056596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.056607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.066471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.066515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.066525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.066530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.066534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.066544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.076475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.076525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.076535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.076540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.076544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.076555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 
00:29:25.251 [2024-12-09 12:04:33.086505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.086562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.086571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.086576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.086580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.086591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.096586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.096636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.096648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.096653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.096657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.096668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.106580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.106626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.106639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.106645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.106649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.106659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 
00:29:25.251 [2024-12-09 12:04:33.116560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.116605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.116614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.116622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.116627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.116639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.251 [2024-12-09 12:04:33.126631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.251 [2024-12-09 12:04:33.126680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.251 [2024-12-09 12:04:33.126690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.251 [2024-12-09 12:04:33.126695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.251 [2024-12-09 12:04:33.126699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.251 [2024-12-09 12:04:33.126709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.251 qpair failed and we were unable to recover it. 00:29:25.513 [2024-12-09 12:04:33.136727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.513 [2024-12-09 12:04:33.136780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.513 [2024-12-09 12:04:33.136797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.513 [2024-12-09 12:04:33.136802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.513 [2024-12-09 12:04:33.136806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.513 [2024-12-09 12:04:33.136820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.513 qpair failed and we were unable to recover it. 
00:29:25.513 [2024-12-09 12:04:33.146673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.513 [2024-12-09 12:04:33.146716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.513 [2024-12-09 12:04:33.146726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.513 [2024-12-09 12:04:33.146731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.513 [2024-12-09 12:04:33.146735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.513 [2024-12-09 12:04:33.146746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.513 qpair failed and we were unable to recover it. 00:29:25.513 [2024-12-09 12:04:33.156690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.513 [2024-12-09 12:04:33.156732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.513 [2024-12-09 12:04:33.156742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.513 [2024-12-09 12:04:33.156747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.513 [2024-12-09 12:04:33.156751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.513 [2024-12-09 12:04:33.156764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.513 qpair failed and we were unable to recover it. 00:29:25.513 [2024-12-09 12:04:33.166733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.166777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.166786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.166791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.166796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.166806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 
00:29:25.514 [2024-12-09 12:04:33.176804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.176854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.176864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.176869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.176873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.176883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.186789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.186833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.186843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.186847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.186852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.186862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.196814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.196859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.196869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.196874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.196878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.196888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 
00:29:25.514 [2024-12-09 12:04:33.206843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.206882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.206892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.206897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.206901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.206911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.216920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.216966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.216976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.216980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.216985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.216994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.226865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.226910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.226920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.226925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.226929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.226939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 
00:29:25.514 [2024-12-09 12:04:33.236930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.236972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.236981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.236986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.236990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.237000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.246927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.246967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.246979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.246984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.246989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.246999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 00:29:25.514 [2024-12-09 12:04:33.257033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.514 [2024-12-09 12:04:33.257083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.514 [2024-12-09 12:04:33.257092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.514 [2024-12-09 12:04:33.257097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.514 [2024-12-09 12:04:33.257101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:25.514 [2024-12-09 12:04:33.257111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.514 qpair failed and we were unable to recover it. 
00:29:25.514 [2024-12-09 12:04:33.266910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.514 [2024-12-09 12:04:33.266966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.514 [2024-12-09 12:04:33.266976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.514 [2024-12-09 12:04:33.266981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.514 [2024-12-09 12:04:33.266985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.514 [2024-12-09 12:04:33.266995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.514 qpair failed and we were unable to recover it.
00:29:25.514 [2024-12-09 12:04:33.277045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.514 [2024-12-09 12:04:33.277089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.514 [2024-12-09 12:04:33.277099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.514 [2024-12-09 12:04:33.277104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.514 [2024-12-09 12:04:33.277108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.514 [2024-12-09 12:04:33.277118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.514 qpair failed and we were unable to recover it.
00:29:25.514 [2024-12-09 12:04:33.287056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.514 [2024-12-09 12:04:33.287096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.514 [2024-12-09 12:04:33.287106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.514 [2024-12-09 12:04:33.287111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.514 [2024-12-09 12:04:33.287118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.514 [2024-12-09 12:04:33.287128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.514 qpair failed and we were unable to recover it.
00:29:25.514 [2024-12-09 12:04:33.297094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.514 [2024-12-09 12:04:33.297185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.514 [2024-12-09 12:04:33.297195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.514 [2024-12-09 12:04:33.297199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.297204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.297213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.307134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.307182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.307192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.307197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.307201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.307211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.317144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.317186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.317196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.317201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.317205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.317215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.327125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.327218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.327227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.327232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.327236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.327246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.337193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.337242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.337251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.337256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.337260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.337270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.347226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.347277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.347287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.347292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.347296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.347306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.357244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.357286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.357295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.357300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.357304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.357314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.367273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.367315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.367324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.367329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.367333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.367343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.377351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.377399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.377415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.377420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.377424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.377435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.515 [2024-12-09 12:04:33.387312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.515 [2024-12-09 12:04:33.387356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.515 [2024-12-09 12:04:33.387366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.515 [2024-12-09 12:04:33.387371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.515 [2024-12-09 12:04:33.387375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.515 [2024-12-09 12:04:33.387385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.515 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.397346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.397387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.397396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.397401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.397405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.397415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.407379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.407423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.407441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.407447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.407452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.407467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.417439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.417490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.417501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.417506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.417518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.417529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.427306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.427354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.427366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.427370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.427375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.427386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.437459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.437532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.437542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.437547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.437552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.437562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.447493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.447535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.447545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.447550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.447554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.447565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.457559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.457607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.457617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.457622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.457626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.457640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.467530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.467574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.467583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.467588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.467593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.467603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.477565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.477608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.477618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.477623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.778 [2024-12-09 12:04:33.477627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.778 [2024-12-09 12:04:33.477640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.778 qpair failed and we were unable to recover it.
00:29:25.778 [2024-12-09 12:04:33.487549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.778 [2024-12-09 12:04:33.487605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.778 [2024-12-09 12:04:33.487614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.778 [2024-12-09 12:04:33.487619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.487624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.487634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.497626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.497679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.497689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.497694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.497698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.497708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.507720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.507783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.507795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.507800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.507804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.507814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.517658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.517750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.517760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.517764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.517769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.517778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.527695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.527738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.527748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.527752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.527757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.527767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.537739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.537787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.537796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.537801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.537805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.537815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.547742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.547787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.547797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.547805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.547809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.547819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.557754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.557797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.557807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.557812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.557816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.557826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.567725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.567778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.567789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.567794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.567798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.567809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.577891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.577964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.577975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.577980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.577984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.577994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.587877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.587927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.587936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.587941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.587945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.587956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.597874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.597916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.597925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.597931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.597935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.597945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.607913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.607970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.607980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.607985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.607989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.607999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.617994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.779 [2024-12-09 12:04:33.618044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.779 [2024-12-09 12:04:33.618053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.779 [2024-12-09 12:04:33.618058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.779 [2024-12-09 12:04:33.618063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.779 [2024-12-09 12:04:33.618073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.779 qpair failed and we were unable to recover it.
00:29:25.779 [2024-12-09 12:04:33.627869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.780 [2024-12-09 12:04:33.627917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.780 [2024-12-09 12:04:33.627926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.780 [2024-12-09 12:04:33.627931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.780 [2024-12-09 12:04:33.627935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.780 [2024-12-09 12:04:33.627945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.780 qpair failed and we were unable to recover it.
00:29:25.780 [2024-12-09 12:04:33.637971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.780 [2024-12-09 12:04:33.638016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.780 [2024-12-09 12:04:33.638026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.780 [2024-12-09 12:04:33.638030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.780 [2024-12-09 12:04:33.638035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.780 [2024-12-09 12:04:33.638044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.780 qpair failed and we were unable to recover it.
00:29:25.780 [2024-12-09 12:04:33.648032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.780 [2024-12-09 12:04:33.648119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.780 [2024-12-09 12:04:33.648129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.780 [2024-12-09 12:04:33.648133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.780 [2024-12-09 12:04:33.648138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.780 [2024-12-09 12:04:33.648148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.780 qpair failed and we were unable to recover it.
00:29:25.780 [2024-12-09 12:04:33.658106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.780 [2024-12-09 12:04:33.658154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.780 [2024-12-09 12:04:33.658164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.780 [2024-12-09 12:04:33.658169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.780 [2024-12-09 12:04:33.658174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:25.780 [2024-12-09 12:04:33.658183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.780 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.668084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.668130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.668140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.668144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.668149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.668159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.678089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.678134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.678144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.678151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.678156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.678166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.688133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.688174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.688184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.688189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.688194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.688204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.698152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.698199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.698209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.698214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.698219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.698229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.708204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.708249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.708258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.708263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.708268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.708278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.718227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.718271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.042 [2024-12-09 12:04:33.718280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.042 [2024-12-09 12:04:33.718285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.042 [2024-12-09 12:04:33.718289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.042 [2024-12-09 12:04:33.718302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.042 qpair failed and we were unable to recover it.
00:29:26.042 [2024-12-09 12:04:33.728248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.042 [2024-12-09 12:04:33.728289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.728299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.728304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.728308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.728318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.738312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.738390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.738400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.738405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.738409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.738418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.748303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.748346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.748357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.748361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.748366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.748376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.758323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.758369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.758388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.758394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.758398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.758412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.768352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.768395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.768406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.768411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.768415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.768426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.778418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.778466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.778475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.778480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.778485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.778495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.788414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.788513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.788532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.788539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.788544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.788558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.798432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.798479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.798497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.798503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.798508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.798522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.808502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.808547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.808561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.808566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.808570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.808581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.818530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.818582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.818593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.818598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.818602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.818612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.828522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.043 [2024-12-09 12:04:33.828569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.043 [2024-12-09 12:04:33.828579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.043 [2024-12-09 12:04:33.828583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.043 [2024-12-09 12:04:33.828588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90
00:29:26.043 [2024-12-09 12:04:33.828598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.043 qpair failed and we were unable to recover it.
00:29:26.043 [2024-12-09 12:04:33.838524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.043 [2024-12-09 12:04:33.838580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.043 [2024-12-09 12:04:33.838589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.043 [2024-12-09 12:04:33.838594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.043 [2024-12-09 12:04:33.838598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.043 [2024-12-09 12:04:33.838608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.043 qpair failed and we were unable to recover it. 00:29:26.043 [2024-12-09 12:04:33.848600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.043 [2024-12-09 12:04:33.848679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.043 [2024-12-09 12:04:33.848690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.043 [2024-12-09 12:04:33.848695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.043 [2024-12-09 12:04:33.848702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.043 [2024-12-09 12:04:33.848713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.043 qpair failed and we were unable to recover it. 00:29:26.043 [2024-12-09 12:04:33.858628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.043 [2024-12-09 12:04:33.858686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.043 [2024-12-09 12:04:33.858695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.043 [2024-12-09 12:04:33.858700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.043 [2024-12-09 12:04:33.858704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.858715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 
00:29:26.044 [2024-12-09 12:04:33.868618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.868688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.868698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.868703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.868707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.868717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 00:29:26.044 [2024-12-09 12:04:33.878600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.878649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.878659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.878663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.878668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.878678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 00:29:26.044 [2024-12-09 12:04:33.888661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.888703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.888713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.888718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.888722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.888732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 
00:29:26.044 [2024-12-09 12:04:33.898777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.898835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.898844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.898849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.898853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.898863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 00:29:26.044 [2024-12-09 12:04:33.908702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.908747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.908757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.908762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.908766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.908776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 00:29:26.044 [2024-12-09 12:04:33.918730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.044 [2024-12-09 12:04:33.918777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.044 [2024-12-09 12:04:33.918787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.044 [2024-12-09 12:04:33.918791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.044 [2024-12-09 12:04:33.918796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.044 [2024-12-09 12:04:33.918806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.044 qpair failed and we were unable to recover it. 
00:29:26.307 [2024-12-09 12:04:33.928752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.307 [2024-12-09 12:04:33.928804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.307 [2024-12-09 12:04:33.928813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.307 [2024-12-09 12:04:33.928818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.307 [2024-12-09 12:04:33.928823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.307 [2024-12-09 12:04:33.928833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.307 qpair failed and we were unable to recover it. 00:29:26.307 [2024-12-09 12:04:33.938794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.307 [2024-12-09 12:04:33.938837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.307 [2024-12-09 12:04:33.938849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.307 [2024-12-09 12:04:33.938854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.307 [2024-12-09 12:04:33.938858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.307 [2024-12-09 12:04:33.938868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.307 qpair failed and we were unable to recover it. 00:29:26.307 [2024-12-09 12:04:33.948823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.307 [2024-12-09 12:04:33.948874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.307 [2024-12-09 12:04:33.948885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.307 [2024-12-09 12:04:33.948889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.307 [2024-12-09 12:04:33.948894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.307 [2024-12-09 12:04:33.948904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.307 qpair failed and we were unable to recover it. 
00:29:26.307 [2024-12-09 12:04:33.958844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.307 [2024-12-09 12:04:33.958887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.307 [2024-12-09 12:04:33.958896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.307 [2024-12-09 12:04:33.958901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.307 [2024-12-09 12:04:33.958905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.307 [2024-12-09 12:04:33.958915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.307 qpair failed and we were unable to recover it. 00:29:26.307 [2024-12-09 12:04:33.968841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.307 [2024-12-09 12:04:33.968912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.307 [2024-12-09 12:04:33.968921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.307 [2024-12-09 12:04:33.968926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.307 [2024-12-09 12:04:33.968930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.307 [2024-12-09 12:04:33.968940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.307 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:33.978925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:33.979014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:33.979023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:33.979028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:33.979036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:33.979046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 
00:29:26.308 [2024-12-09 12:04:33.988917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:33.988962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:33.988971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:33.988976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:33.988980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:33.988990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:33.998961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:33.998999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:33.999008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:33.999013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:33.999017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:33.999027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.008969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.009006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.009016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.009020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.009025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.009034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 
00:29:26.308 [2024-12-09 12:04:34.018971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.019013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.019022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.019027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.019032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.019042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.029043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.029082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.029091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.029096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.029101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.029110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.039016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.039058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.039068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.039072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.039077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.039087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 
00:29:26.308 [2024-12-09 12:04:34.049086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.049128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.049138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.049143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.049148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.049158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.059115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.059154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.059163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.059168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.059172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.059182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.069146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.069195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.069207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.069212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.069216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.069226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 
00:29:26.308 [2024-12-09 12:04:34.079141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.079180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.079189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.079194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.079199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.079208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.089182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.089232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.089241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.089246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.089250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b4000b90 00:29:26.308 [2024-12-09 12:04:34.089259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.308 qpair failed and we were unable to recover it. 00:29:26.308 [2024-12-09 12:04:34.099213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.099306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.099371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.308 [2024-12-09 12:04:34.099397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.308 [2024-12-09 12:04:34.099419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x199d0c0 00:29:26.308 [2024-12-09 12:04:34.099472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.308 qpair failed and we were unable to recover it. 
00:29:26.308 [2024-12-09 12:04:34.109260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.308 [2024-12-09 12:04:34.109365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.308 [2024-12-09 12:04:34.109413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.309 [2024-12-09 12:04:34.109440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.309 [2024-12-09 12:04:34.109456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x199d0c0 00:29:26.309 [2024-12-09 12:04:34.109497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.309 qpair failed and we were unable to recover it. 00:29:26.309 [2024-12-09 12:04:34.119306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.309 [2024-12-09 12:04:34.119409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.309 [2024-12-09 12:04:34.119473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.309 [2024-12-09 12:04:34.119498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.309 [2024-12-09 12:04:34.119519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06bc000b90 00:29:26.309 [2024-12-09 12:04:34.119575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.309 qpair failed and we were unable to recover it. 00:29:26.309 [2024-12-09 12:04:34.129304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.309 [2024-12-09 12:04:34.129397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.309 [2024-12-09 12:04:34.129437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.309 [2024-12-09 12:04:34.129459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.309 [2024-12-09 12:04:34.129478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06bc000b90 00:29:26.309 [2024-12-09 12:04:34.129522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.309 qpair failed and we were unable to recover it. 
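The block above repeats one pattern: each attempt to add an I/O qpair is rejected by the target with "Unknown controller ID 0x1" (the controller with CNTLID 1 no longer exists on the target while the disconnect test tears it down), so the host's Fabrics CONNECT completes with sct 1 / sc 130 (0x82, Connect Invalid Parameters) and the qpair is abandoned. A minimal triage sketch for this output, assuming the console log has been saved to build.log (the file name is an assumption):

    # Count how many qpairs the host gave up on.
    grep -c 'qpair failed and we were unable to recover it' build.log
    # Group the failures by tqpair pointer to see how many distinct
    # qpairs were actually involved.
    grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log | sort | uniq -c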
00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Read completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 Write completed with error (sct=0, sc=8) 00:29:26.309 starting I/O failed 00:29:26.309 [2024-12-09 12:04:34.130461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.309 [2024-12-09 12:04:34.139335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.309 [2024-12-09 12:04:34.139441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.309 [2024-12-09 12:04:34.139505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.309 [2024-12-09 12:04:34.139530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: 
*ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.309 [2024-12-09 12:04:34.139551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b0000b90 00:29:26.309 [2024-12-09 12:04:34.139607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.309 qpair failed and we were unable to recover it. 00:29:26.309 [2024-12-09 12:04:34.149356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.309 [2024-12-09 12:04:34.149441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.309 [2024-12-09 12:04:34.149475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.309 [2024-12-09 12:04:34.149493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.309 [2024-12-09 12:04:34.149509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f06b0000b90 00:29:26.309 [2024-12-09 12:04:34.149544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:26.309 qpair failed and we were unable to recover it. 00:29:26.309 [2024-12-09 12:04:34.149672] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:26.309 A controller has encountered a failure and is being reset. 00:29:26.309 [2024-12-09 12:04:34.149787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1992e10 (9): Bad file descriptor 00:29:26.309 Controller properly reset. 00:29:26.309 Initializing NVMe Controllers 00:29:26.309 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:26.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:26.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:26.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:26.309 Initialization complete. Launching workers. 
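The 32 "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" records above are the outstanding I/O being failed back as the qpairs go down; sct 0 / sc 8 is the generic status "Command Aborted due to SQ Deletion". Once the Keep Alive submission also fails on the dead socket, the host resets the controller, reattaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and relaunches one worker per lcore. A hedged sketch to tally that burst, again assuming the output is saved to build.log:

    # Split the aborted-I/O burst into reads vs. writes (GNU grep).
    grep -o '\(Read\|Write\) completed with error (sct=0, sc=8)' build.log |
        sort | uniq -c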
00:29:26.309 Starting thread on core 1 00:29:26.309 Starting thread on core 2 00:29:26.309 Starting thread on core 3 00:29:26.309 Starting thread on core 0 00:29:26.309 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:26.309 00:29:26.309 real 0m11.409s 00:29:26.309 user 0m21.932s 00:29:26.309 sys 0m3.855s 00:29:26.309 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.309 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.309 ************************************ 00:29:26.309 END TEST nvmf_target_disconnect_tc2 00:29:26.309 ************************************ 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # sync 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # set +e 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # for i in {1..20} 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:29:26.571 rmmod nvme_tcp 00:29:26.571 rmmod nvme_fabrics 00:29:26.571 rmmod nvme_keyring 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # set -e 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@130 -- # return 0 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 235529 ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 235529 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 235529 ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 235529 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235529 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235529' 00:29:26.571 killing process with pid 235529 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 235529 00:29:26.571 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 235529 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # iptr 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # remove_spdk_ns 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.832 12:04:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.750 12:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:29:28.750 00:29:28.750 real 0m21.652s 00:29:28.750 user 0m49.637s 00:29:28.750 sys 0m9.887s 00:29:28.750 12:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.750 12:04:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:28.750 ************************************ 00:29:28.750 END TEST nvmf_target_disconnect 00:29:28.751 ************************************ 00:29:28.751 12:04:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:28.751 00:29:28.751 real 6m30.406s 00:29:28.751 user 11m31.236s 00:29:28.751 sys 2m13.901s 00:29:28.751 12:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.751 12:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.751 ************************************ 00:29:28.751 END TEST nvmf_host 00:29:28.751 ************************************ 00:29:28.751 12:04:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:28.751 12:04:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:28.751 12:04:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:28.751 12:04:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:28.751 12:04:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.751 12:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.012 ************************************ 00:29:29.012 START TEST nvmf_target_core_interrupt_mode 00:29:29.012 ************************************ 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:29.012 * Looking for test storage... 00:29:29.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.012 --rc genhtml_branch_coverage=1 00:29:29.012 --rc genhtml_function_coverage=1 00:29:29.012 --rc genhtml_legend=1 00:29:29.012 --rc geninfo_all_blocks=1 00:29:29.012 --rc geninfo_unexecuted_blocks=1 00:29:29.012 00:29:29.012 ' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.012 --rc genhtml_branch_coverage=1 00:29:29.012 --rc genhtml_function_coverage=1 00:29:29.012 --rc genhtml_legend=1 00:29:29.012 --rc geninfo_all_blocks=1 00:29:29.012 --rc geninfo_unexecuted_blocks=1 00:29:29.012 00:29:29.012 ' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.012 --rc genhtml_branch_coverage=1 00:29:29.012 --rc genhtml_function_coverage=1 00:29:29.012 --rc genhtml_legend=1 00:29:29.012 --rc geninfo_all_blocks=1 00:29:29.012 --rc geninfo_unexecuted_blocks=1 00:29:29.012 00:29:29.012 ' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.012 --rc genhtml_branch_coverage=1 00:29:29.012 --rc genhtml_function_coverage=1 00:29:29.012 --rc genhtml_legend=1 00:29:29.012 --rc geninfo_all_blocks=1 00:29:29.012 --rc geninfo_unexecuted_blocks=1 00:29:29.012 00:29:29.012 ' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.012 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.013 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.275 12:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # : 0 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@56 -- # have_pci_nics=0 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 
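With the disconnect suite finished and torn down (nvme-tcp/nvme-fabrics/nvme-keyring unloaded, target pid 235529 killed, SPDK_NVMF iptables rules dropped, the test interface flushed), autotest moves on to the interrupt-mode target suite. The storage probe at the start of each test gates the lcov coverage flags on the installed lcov version via cmp_versions from scripts/common.sh, whose trace appears above. A simplified, standalone reconstruction of that comparison (the real helpers live in scripts/common.sh; this sketch only mirrors their behavior for plain numeric, dot-separated versions):

    # Compare two dot/dash/colon-separated numeric versions, as in the
    # traced 'lt 1.15 2' call.
    cmp_versions() {
        local IFS=.-: v max
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *=* ]]   # equal versions satisfy only <= / >= / ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov is older than 2: keep the legacy LCOV_OPTS'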
00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.275 ************************************ 00:29:29.275 START TEST nvmf_abort 00:29:29.275 ************************************ 00:29:29.275 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:29.275 * Looking for test storage... 00:29:29.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.275 --rc genhtml_branch_coverage=1 00:29:29.275 --rc genhtml_function_coverage=1 00:29:29.275 --rc genhtml_legend=1 00:29:29.275 --rc geninfo_all_blocks=1 00:29:29.275 --rc geninfo_unexecuted_blocks=1 00:29:29.275 00:29:29.275 ' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.275 --rc genhtml_branch_coverage=1 00:29:29.275 --rc genhtml_function_coverage=1 00:29:29.275 --rc genhtml_legend=1 00:29:29.275 --rc geninfo_all_blocks=1 00:29:29.275 --rc geninfo_unexecuted_blocks=1 00:29:29.275 00:29:29.275 ' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.275 --rc genhtml_branch_coverage=1 00:29:29.275 --rc genhtml_function_coverage=1 00:29:29.275 --rc genhtml_legend=1 00:29:29.275 --rc geninfo_all_blocks=1 00:29:29.275 --rc geninfo_unexecuted_blocks=1 00:29:29.275 00:29:29.275 ' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.275 --rc genhtml_branch_coverage=1 00:29:29.275 --rc genhtml_function_coverage=1 00:29:29.275 --rc genhtml_legend=1 00:29:29.275 --rc geninfo_all_blocks=1 00:29:29.275 --rc geninfo_unexecuted_blocks=1 00:29:29.275 00:29:29.275 ' 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
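NOTE: the scripts/common.sh trace above is a field-wise version comparison: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares numeric fields left to right to decide whether the installed lcov predates 2. A self-contained sketch of that logic (simplified: the real cmp_versions also validates each field through its decimal helper and supports other operators):

  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older
      done
      return 1                                              # equal is not less-than
  }
  lt 1.15 2 && echo older    # prints "older", matching the trace: 1 < 2 at the first field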
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.275 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.276 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
[PATH value and paths/export.sh@2-@6 re-export trace elided: a duplicate of the PATH dump shown for nvmf_target_core above] 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # : 0 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.538 12:04:37
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@56 -- # have_pci_nics=0 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@310 -- # xtrace_disable 00:29:29.538 12:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.682 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.682 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_devs=() 00:29:37.682 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_devs 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_net_devs=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@318 -- # pci_drivers=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@318 -- # local -A pci_drivers 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # net_devs=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga net_devs 00:29:37.683 12:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # e810=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga e810 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # x722=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga x722 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@323 -- # mlx=() 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@323 -- # local -ga mlx 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:37.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 
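NOTE: gather_supported_nvmf_pci_devs above builds per-family device lists (e810, x722, mlx) by indexing a vendor:device cache such as pci_bus_cache["0x8086:0x159b"], picks the matching family for this rig (e810, bound to the ice driver), and then resolves each PCI address to its kernel netdev via sysfs. A sketch of that resolution step, using the two addresses actually found in this run; the operstate up/down check done by the real code is omitted:

  pci_devs=(0000:4b:00.0 0000:4b:00.1)    # both matched 0x8086:0x159b (E810)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per bound netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
  # In this log the loop yields cvl_0_0 and cvl_0_1.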
00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:37.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:37.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:37.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@279 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:29:37.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:29:37.683 00:29:37.683 --- 10.0.0.2 ping statistics --- 00:29:37.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.683 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:29:37.683 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:29:37.683 00:29:37.683 --- 10.0.0.1 ping statistics --- 00:29:37.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.683 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=240954 
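NOTE: nvmf_tcp_init above needs no veth pair; it splits the two physical ports across namespaces. cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Collected from the trace, the setup reduces to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> target side, 0.667 ms here
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator side, 0.322 ms

The comment tag on the iptables rule is what lets teardown later delete exactly these rules by filtering the saved ruleset on SPDK_NVMF.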
00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 240954 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 240954 ']' 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:37.684 [2024-12-09 12:04:44.604131] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:37.684 [2024-12-09 12:04:44.605297] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:29:37.684 [2024-12-09 12:04:44.605353] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.684 [2024-12-09 12:04:44.703239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.684 [2024-12-09 12:04:44.754599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.684 [2024-12-09 12:04:44.754665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.684 [2024-12-09 12:04:44.754674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.684 [2024-12-09 12:04:44.754681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.684 [2024-12-09 12:04:44.754687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.684 [2024-12-09 12:04:44.756661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.684 [2024-12-09 12:04:44.756828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.684 [2024-12-09 12:04:44.756932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.684 [2024-12-09 12:04:44.838679] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:37.684 [2024-12-09 12:04:44.838739] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.684 [2024-12-09 12:04:44.839418] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
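NOTE: the nvmfpid/waitforlisten pair above boils down to launching the target inside the namespace and polling its RPC socket, as the nvmf/common.sh@504 line shows: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE. The core mask works out as 0xE = 0b1110, i.e. cores 1, 2 and 3 with core 0 excluded, which matches "Total cores available: 3" and the three reactors reported on cores 1, 2 and 3.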
00:29:37.684 [2024-12-09 12:04:44.839700] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 [2024-12-09 12:04:45.453907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 Malloc0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 Delay0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
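NOTE: the rpc_cmd calls traced above, together with the two listener calls just below, are the whole target configuration for this test. Replayed as plain rpc.py invocations, on the assumption that rpc_cmd simply forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256      # create the TCP transport, flags as traced
  $rpc bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM disk with 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000          # large artificial latency, presumably so aborts find I/O still queued
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose Delay0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420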
00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 [2024-12-09 12:04:45.537821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.684 12:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:37.945 [2024-12-09 12:04:45.719822] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:40.490 Initializing NVMe Controllers 00:29:40.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:40.490 controller IO queue size 128 less than required 00:29:40.490 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:40.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:40.490 Initialization complete. Launching workers. 
00:29:40.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28708 00:29:40.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28765, failed to submit 66 00:29:40.490 success 28708, unsuccessful 57, failed 0 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@122 -- # sync 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # set +e 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # for i in {1..20} 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:29:40.490 rmmod nvme_tcp 00:29:40.490 rmmod nvme_fabrics 00:29:40.490 rmmod nvme_keyring 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # set -e 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@130 -- # return 0 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 240954 ']' 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 240954 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 240954 ']' 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 240954 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240954 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240954' 00:29:40.490 killing process with pid 240954 00:29:40.490 
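NOTE: the abort run's bookkeeping above is self-consistent: 28765 abort commands were submitted, of which 28708 succeeded and 57 came back unsuccessful (28708 + 57 = 28765), with a further 66 aborts that could not be submitted at all; on the I/O side, 123 operations completed normally while 28708 were reported failed, matching the successful-abort count.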
12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 240954 00:29:40.490 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 240954 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # iptr 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:40.490 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:29:40.491 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.491 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # remove_spdk_ns 00:29:40.491 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.491 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.491 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:29:42.408 00:29:42.408 real 0m13.163s 00:29:42.408 user 0m10.830s 00:29:42.408 sys 0m6.849s 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.408 ************************************ 00:29:42.408 END TEST nvmf_abort 00:29:42.408 ************************************ 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.408 ************************************ 00:29:42.408 START TEST nvmf_ns_hotplug_stress 00:29:42.408 ************************************ 00:29:42.408 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:42.671 * Looking for test storage... 
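NOTE: teardown in the trace above unwinds only what the test added: the iptr helper replays the saved firewall minus every rule tagged SPDK_NVMF, and _remove_spdk_ns (its body is suppressed by xtrace_disable_per_cmd) drops the test namespace before the initiator address is flushed. Reconstructed, with the namespace deletion being an assumption since its trace is hidden:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged test rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # as traced at nvmf/common.sh@304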
00:29:42.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.671 --rc genhtml_branch_coverage=1 00:29:42.671 --rc genhtml_function_coverage=1 00:29:42.671 --rc genhtml_legend=1 00:29:42.671 --rc geninfo_all_blocks=1 00:29:42.671 --rc geninfo_unexecuted_blocks=1 00:29:42.671 00:29:42.671 ' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.671 --rc genhtml_branch_coverage=1 00:29:42.671 --rc genhtml_function_coverage=1 00:29:42.671 --rc genhtml_legend=1 00:29:42.671 --rc geninfo_all_blocks=1 00:29:42.671 --rc geninfo_unexecuted_blocks=1 00:29:42.671 00:29:42.671 ' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.671 --rc genhtml_branch_coverage=1 00:29:42.671 --rc genhtml_function_coverage=1 00:29:42.671 --rc genhtml_legend=1 00:29:42.671 --rc geninfo_all_blocks=1 00:29:42.671 --rc geninfo_unexecuted_blocks=1 00:29:42.671 00:29:42.671 ' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.671 --rc genhtml_branch_coverage=1 00:29:42.671 --rc genhtml_function_coverage=1 
00:29:42.671 --rc genhtml_legend=1 00:29:42.671 --rc geninfo_all_blocks=1 00:29:42.671 --rc geninfo_unexecuted_blocks=1 00:29:42.671 00:29:42.671 ' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.671 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.671 12:04:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # : 0 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@56 -- # have_pci_nics=0 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # xtrace_disable 00:29:42.672 12:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_devs=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_devs 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_net_devs=() 00:29:50.821 12:04:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # pci_drivers=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # local -A pci_drivers 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # net_devs=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga net_devs 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # e810=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga e810 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # x722=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga x722 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # mlx=() 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # local -ga mlx 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:29:50.821 12:04:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:50.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:50.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.821 
12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.821 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:50.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:50.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:29:50.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:29:50.822 00:29:50.822 --- 10.0.0.2 ping statistics --- 00:29:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.822 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:29:50.822 00:29:50.822 --- 10.0.0.1 ping statistics --- 00:29:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.822 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=245795 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 245795 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 245795 ']' 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
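
The sequence traced above is the whole dual-namespace TCP fixture, and it can be reproduced standalone. A minimal bash sketch, using the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses from this run; the real nvmf/common.sh additionally tags its iptables rule with an SPDK_NVMF comment so it can be cleaned up later:

    # Put the target-side port in its own network namespace so initiator and
    # target traffic cross a real link on a single host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1 in the default namespace; the target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP listener port, then sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
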
00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.822 12:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:50.822 [2024-12-09 12:04:57.823465] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:50.822 [2024-12-09 12:04:57.824586] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:29:50.822 [2024-12-09 12:04:57.824651] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.822 [2024-12-09 12:04:57.925345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.822 [2024-12-09 12:04:57.977624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.822 [2024-12-09 12:04:57.977691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.822 [2024-12-09 12:04:57.977700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.822 [2024-12-09 12:04:57.977707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.822 [2024-12-09 12:04:57.977713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.822 [2024-12-09 12:04:57.979509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.822 [2024-12-09 12:04:57.979694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.822 [2024-12-09 12:04:57.979736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.822 [2024-12-09 12:04:58.059388] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:50.822 [2024-12-09 12:04:58.059473] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.822 [2024-12-09 12:04:58.060245] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:50.822 [2024-12-09 12:04:58.060461] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
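
Note the startup banner: every reactor and every spdk_thread comes up in interrupt mode, which is exactly what the --interrupt-mode flag in the nvmf_tgt invocation above requests. A minimal sketch of what nvmfappstart/waitforlisten amount to here; the rpc_get_methods poll is an illustrative stand-in for waitforlisten's readiness check, not the literal implementation:

    # Shm id 0, all tracepoint groups (0xFFFF), event-driven interrupt mode
    # instead of busy polling, reactors on cores 1-3 (mask 0xE).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # UNIX-domain RPC sockets are not scoped to network namespaces, so the
    # default namespace can poll until the app answers (or bail if it died).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.5
    done
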
00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:50.822 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:51.084 [2024-12-09 12:04:58.836126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.084 12:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:51.345 12:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.345 [2024-12-09 12:04:59.221773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.651 12:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.651 12:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:51.978 Malloc0 00:29:51.978 12:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:51.978 Delay0 00:29:51.978 12:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.261 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:52.521 NULL1 00:29:52.521 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
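
Collected from the trace above, the provisioning boils down to a short rpc.py sequence, and the stress phase that follows (see the spdk_nvme_perf invocation just below) is a remove/re-add/resize loop keyed off the perf process. A bash sketch: the RPC commands and transport flags are copied verbatim from this run, while the loop body is inferred from the repeating kill -0 / remove_ns / add_ns / bdev_null_resize pattern in the trace rather than quoted from target/ns_hotplug_stress.sh itself:

    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                   # allow any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    $rpc bdev_malloc_create 32 512 -b Malloc0             # 32 MiB RAM disk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s read/write latencies (usec)
    $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB resizable null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Hotplug stress: while a 30 s queued randread job hammers the subsystem,
    # keep removing/re-adding namespace 1 and growing NULL1 one step at a time.
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!
    size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        size=$((size + 1))
        $rpc bdev_null_resize NULL1 "$size"
    done
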
00:29:52.781 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=246343 00:29:52.781 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:29:52.781 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:52.781 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.781 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.041 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:53.041 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:53.041 true 00:29:53.301 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:29:53.301 12:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.301 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.562 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:53.562 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:53.821 true 00:29:53.822 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:29:53.822 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.081 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.081 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:54.081 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:54.341 true 00:29:54.341 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 246343 [... 28 further identical iterations omitted (00:29:54 - 00:30:09), null_size 1004 through 1031: each pass removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds Delay0, runs bdev_null_resize NULL1 to the new size (every resize returns true), and re-checks with kill -0 that perf PID 246343 is still alive ...] 00:30:09.854 12:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.115 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.376 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:10.376 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:10.376 true 00:30:10.637 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:10.637 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.638 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.899 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:10.899 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:11.160 true 00:30:11.160 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:11.160 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.160 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.422 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:11.422 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:11.682 true 00:30:11.682 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:11.682 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.943 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.943 12:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:11.943 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:12.204 true 00:30:12.204 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:12.204 12:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.464 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.465 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:12.465 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:12.726 true 00:30:12.726 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:12.726 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.987 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.249 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:13.249 12:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:13.249 true 00:30:13.249 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:13.249 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.509 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.770 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:13.770 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:13.770 true 00:30:13.770 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:13.770 12:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.030 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.290 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:14.290 12:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:14.290 true 00:30:14.290 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:14.290 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.551 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.811 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:14.811 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:14.811 true 00:30:15.071 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:15.071 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.071 12:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.331 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:15.331 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:15.592 true 00:30:15.592 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:15.592 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.592 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.852 12:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:15.852 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:16.113 true 00:30:16.113 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:16.113 12:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.374 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.374 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:16.374 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:16.635 true 00:30:16.635 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:16.635 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.895 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.895 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:16.895 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:17.155 true 00:30:17.155 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:17.155 12:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.414 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.675 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:17.675 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:17.675 true 00:30:17.675 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:17.675 12:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.935 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.194 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:18.194 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:18.194 true 00:30:18.194 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:18.194 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.454 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.714 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:18.714 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:18.973 true 00:30:18.973 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:18.973 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.973 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.233 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:19.233 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:19.493 true 00:30:19.493 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:19.493 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.493 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.754 12:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:19.754 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:20.014 true 00:30:20.014 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:20.014 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.275 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.275 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:20.275 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:20.535 true 00:30:20.535 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:20.535 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.796 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.059 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:21.059 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:21.059 true 00:30:21.059 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:21.059 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.319 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.579 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:21.579 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:21.579 true 00:30:21.579 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343 00:30:21.579 12:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:21.840 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:22.101 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:22.101 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:22.101 true
00:30:22.101 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343
00:30:22.101 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:22.361 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:22.622 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:22.622 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:22.622 true
00:30:22.882 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343
00:30:22.882 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:22.882 Initializing NVMe Controllers
00:30:22.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:22.882 Controller IO queue size 128, less than required.
00:30:22.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:22.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:22.882 Initialization complete. Launching workers.
00:30:22.882 ========================================================
00:30:22.882                                                           Latency(us)
00:30:22.883 Device Information                                      : IOPS       MiB/s      Average    min        max
00:30:22.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30108.67   14.70      4251.17    1086.66    11056.73
00:30:22.883 ========================================================
00:30:22.883 Total                                                   : 30108.67   14.70      4251.17    1086.66    11056.73
00:30:22.883
00:30:22.883 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:23.143 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:23.143 12:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:23.404 true
00:30:23.404 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 246343
00:30:23.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (246343) - No such process
00:30:23.404 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 246343
00:30:23.404 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:23.404 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:23.665 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:23.665 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:23.665 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:23.665 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:23.665 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:23.926 null0
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:23.926 null1
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
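The sh@44-50 records above are the sequential phase of ns_hotplug_stress.sh: poll a process, swap namespace 1 out and back in, and grow the NULL1 bdev by one unit per pass. A minimal sketch of that loop, reconstructed only from the traced commands ($rpc is shorthand introduced here for the rpc.py path seen in the trace; the starting size and the polled PID 246343 are read off this run, and the real script may be organized differently):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    pid=246343        # process polled at sh@44; the perf-style summary above prints as it exits
    null_size=1021    # first value visible in this excerpt
    while kill -0 "$pid"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                                   # sh@49
        $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 under load
    done
    wait "$pid"                                                        # sh@53: reap once kill -0 reports "No such process"
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # sh@54
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2         # sh@55

Once the loop exits, the trace moves to the parallel phase: nthreads=8 workers, each backed by its own null bdev.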
00:30:23.926 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:24.186 null2 00:30:24.186 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.186 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.186 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:24.186 null3 00:30:24.186 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.186 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.186 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:24.447 null4 00:30:24.447 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.447 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.447 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:24.708 null5 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:24.708 null6 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.708 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:24.970 null7 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
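The @63 records launch eight copies of the add_remove helper, whose body is traced as sh@14-18. Reconstructed from those records alone (the loop bound of 10 comes from the `(( i < 10 ))` trace; $rpc is the same shorthand as above):

    # Each worker adds and removes one namespace ten times, concurrently with the other seven.
    add_remove() {
        local nsid=$1 bdev=$2              # sh@14, e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do     # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }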
00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:24.970 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 252546 252548 252552 252554 252557 252559 252562 252565 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:24.971 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:25.231 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:25.231 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:25.491 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.491 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.491 12:05:33 
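The wait at sh@66 collects the eight background workers (PIDs 252546 through 252565 in this run). The launch sequence those interleaved records describe, sketched under the same assumptions as above:

    nthreads=8; pids=()                      # sh@58
    for ((i = 0; i < nthreads; i++)); do     # sh@59-60: one null bdev per worker (name, size in MB, block size in bytes)
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do     # sh@62-64: worker i hotplugs NSID i+1 backed by null$i
        add_remove $((i + 1)) "null$i" &     # sh@63
        pids+=($!)                           # sh@64
    done
    wait "${pids[@]}"                        # sh@66

The @16-18 records that follow are those eight loops racing each other against cnode1, which is the point of the stress test; the xtrace lines interleave because each worker runs in its own background subshell.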
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:25.491 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.491 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:25.492 12:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.492 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:25.753 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:26.015 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.276 12:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:26.277 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.538 12:05:34 
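The null0..null7 devices being attached above are SPDK null bdevs that the test creates during setup. That setup step is not part of this excerpt, so the sketch below is an assumption-labeled reconstruction: bdev_null_create takes a name, a size in MiB, and a block size, and the sizes shown here are illustrative, not traced values.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create eight null bdevs to churn as namespaces; 100 MiB / 4 KiB blocks are assumed sizes
    for n in {0..7}; do
        "$rpc_py" bdev_null_create "null$n" 100 4096
    done
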
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:30:26.538 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:26.800 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.063 12:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.324 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.324 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.325 12:05:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.325 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:27.587 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:27.849 12:05:35 
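Every rpc.py call traced in this loop is a thin JSON-RPC 2.0 client talking to the SPDK target over a Unix socket. As a sketch of what one traced add amounts to on the wire (/var/tmp/spdk.sock is SPDK's default socket path, and using nc -U to speak to it is an assumption for illustration, not part of the test):

    # hand-rolled equivalent of: rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "null4", "nsid": 5}}}' \
      | nc -U /var/tmp/spdk.sock
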
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:27.849 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:27.849 12:05:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.111 12:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.372 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 
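The xtrace above comes from lines 16-18 of target/ns_hotplug_stress.sh: a counted loop that adds a namespace and then removes it. The back-to-back (( ++i )) / (( i < 10 )) pairs and the shuffled nsid order indicate several such loops running in parallel, one per namespace, all sharing one trace stream. A sketch consistent with the traced commands; the per-namespace backgrounding is an inference from the interleaving, not shown verbatim in the source:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for n in {1..8}; do
        (
            for ((i = 0; i < 10; ++i)); do                                      # @16
                "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"  # @17
                "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"                   # @18
            done
        ) &   # one churn loop per nsid, hence the interleaved trace
    done
    wait
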
12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.633 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:28.895 12:05:36 
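Just below, the loop's trap is cleared and nvmftestfini winds the target down: nvmfcleanup syncs and unloads the kernel initiator modules inside a retry loop (the rmmod lines are modprobe -v output), killprocess stops the SPDK reactor after checking it is not a sudo wrapper, and iptr restores iptables with the SPDK test rules filtered out. A condensed sketch of that traced sequence; the sleep between retries and the success/break handling are assumptions:

    sync
    set +e
    for i in {1..20}; do                                                 # common.sh@126
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break  # @127/@128
        sleep 1                                                          # assumed backoff
    done
    set -e

    pid=245795                                  # nvmf_tgt pid for this run
    name=$(ps --no-headers -o comm= "$pid")     # autotest_common.sh@960
    if [[ $name != sudo ]]; then                # @964: never kill the sudo wrapper
        echo "killing process with pid $pid"    # @972
        kill "$pid" && wait "$pid"              # @973/@978
    fi

    # iptr (common.sh@787): drop only the SPDK_NVMF rules, keep the rest of the firewall
    iptables-save | grep -v SPDK_NVMF | iptables-restore
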
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:28.895 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # sync 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # set +e 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # for i in {1..20} 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:30:29.156 rmmod nvme_tcp 00:30:29.156 rmmod nvme_fabrics 00:30:29.156 rmmod nvme_keyring 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # set -e 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@130 -- # return 0 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 245795 ']' 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 245795 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 245795 ']' 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 245795 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 245795 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 245795' 00:30:29.156 killing process with pid 245795 00:30:29.156 12:05:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 245795 00:30:29.156 12:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 245795 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # iptr 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # remove_spdk_ns 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.418 12:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.336 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:30:31.336 00:30:31.336 real 0m49.009s 00:30:31.336 user 3m4.784s 00:30:31.336 sys 0m21.727s 00:30:31.336 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.336 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:31.336 ************************************ 00:30:31.336 END TEST nvmf_ns_hotplug_stress 00:30:31.336 ************************************ 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.598 ************************************ 00:30:31.598 START TEST nvmf_delete_subsystem 00:30:31.598 ************************************ 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:31.598 * Looking for 
test storage... 00:30:31.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.598 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.860 --rc genhtml_branch_coverage=1 00:30:31.860 --rc genhtml_function_coverage=1 00:30:31.860 --rc genhtml_legend=1 00:30:31.860 --rc geninfo_all_blocks=1 00:30:31.860 --rc geninfo_unexecuted_blocks=1 00:30:31.860 00:30:31.860 ' 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.860 --rc genhtml_branch_coverage=1 00:30:31.860 --rc genhtml_function_coverage=1 00:30:31.860 --rc genhtml_legend=1 00:30:31.860 --rc geninfo_all_blocks=1 00:30:31.860 --rc geninfo_unexecuted_blocks=1 00:30:31.860 00:30:31.860 ' 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.860 --rc genhtml_branch_coverage=1 00:30:31.860 --rc genhtml_function_coverage=1 00:30:31.860 --rc genhtml_legend=1 00:30:31.860 --rc geninfo_all_blocks=1 00:30:31.860 --rc geninfo_unexecuted_blocks=1 00:30:31.860 00:30:31.860 ' 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:31.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.860 --rc genhtml_branch_coverage=1 00:30:31.860 --rc genhtml_function_coverage=1 00:30:31.860 --rc 
genhtml_legend=1 00:30:31.860 --rc geninfo_all_blocks=1 00:30:31.860 --rc geninfo_unexecuted_blocks=1 00:30:31.860 00:30:31.860 ' 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.860 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.861 12:05:39 
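
The trace above walks the lt/cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' via IFS and compared field by field as integers, which is how lcov 1.15 is classified as pre-2.x so the legacy --rc lcov_branch_coverage/lcov_function_coverage options get exported. A minimal standalone sketch of that comparison, simplified to purely numeric fields (a re-implementation for illustration, not the script's exact code):

    # Simplified re-implementation of the cmp_versions walk traced above;
    # assumes purely numeric version fields.
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk up to the longer of the two field lists, missing fields count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every field equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 sorts before 2: enable legacy --rc options"
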
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # : 0 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@56 -- # have_pci_nics=0 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # xtrace_disable 00:30:31.861 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_devs=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_devs 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_net_devs=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # 
pci_drivers=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # local -A pci_drivers 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # net_devs=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga net_devs 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # e810=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga e810 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # x722=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga x722 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # mlx=() 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # local -ga mlx 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:30:40.010 
12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:40.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:40.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:30:40.010 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:40.011 12:05:46 
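
The device scan above keys NIC families by PCI vendor:device ID (0x8086:0x159b is the Intel E810 "ice" function, found twice here at 0000:4b:00.0 and 0000:4b:00.1) and then looks under each matched function's sysfs node for its net devices. A hypothetical direct-sysfs version of the same walk; the real nvmf/common.sh works from a prebuilt pci_bus_cache rather than globbing like this:

    # Hypothetical direct-sysfs sketch of the E810 discovery traced above.
    intel=0x8086
    e810_ids=(0x1592 0x159b)
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
            # Kernel exposes the bound netdev names under <pci>/net/.
            pci_net_devs=("$dev/net/"*)
            echo "Found net devices under ${dev##*/}: ${pci_net_devs[*]##*/}"
        done
    done
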
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:40.011 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:40.011 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:30:40.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:30:40.011 00:30:40.011 --- 10.0.0.2 ping statistics --- 00:30:40.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.011 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:30:40.011 00:30:40.011 --- 10.0.0.1 ping statistics --- 00:30:40.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.011 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=257670 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 257670 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 257670 ']' 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
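
Everything from nvmf_tcp_init down through the two pings above is the physical-NIC split: one E810 port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables ACCEPT for the NVMe/TCP port inserted first. Condensed from the ip/iptables calls in this log (root required; the addr-flush steps are omitted here):

    # Namespace plumbing as traced above, reduced to the essential calls.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
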
00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.011 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.011 [2024-12-09 12:05:46.994043] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.011 [2024-12-09 12:05:46.995194] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:30:40.011 [2024-12-09 12:05:46.995245] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.011 [2024-12-09 12:05:47.096089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:40.011 [2024-12-09 12:05:47.147770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.011 [2024-12-09 12:05:47.147824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.011 [2024-12-09 12:05:47.147833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.011 [2024-12-09 12:05:47.147841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.011 [2024-12-09 12:05:47.147847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.011 [2024-12-09 12:05:47.149608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.011 [2024-12-09 12:05:47.149611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.011 [2024-12-09 12:05:47.228206] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:40.011 [2024-12-09 12:05:47.229027] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:40.011 [2024-12-09 12:05:47.229258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
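
The DPDK EAL banner, the "Set SPDK running in interrupt mode" notice, and the reactor/thread messages above all come from the nvmf_tgt process started inside the target namespace. Reduced to its command line, mirroring the NVMF_TARGET_NS_CMD and NVMF_APP arrays assembled earlier in this log (path shortened to the build tree):

    # Launching the target under test, as traced at nvmf/common.sh@504 above.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!    # 257670 in this run
    # waitforlisten "$nvmfpid" then blocks until /var/tmp/spdk.sock accepts RPCs.
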
00:30:40.011 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.011 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:40.011 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:40.011 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.012 [2024-12-09 12:05:47.866870] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.012 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 [2024-12-09 12:05:47.899439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 NULL1 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.273 12:05:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 Delay0 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=258019 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:40.273 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:40.273 [2024-12-09 12:05:47.998260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
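
Before the failure injection starts, the rpc_cmd traces above build the whole target: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev with 512-byte blocks, and a delay bdev wrapping it with 1,000,000 us of added read/write latency so plenty of I/O is still in flight when the subsystem is deleted. The same sequence as direct rpc.py calls, assuming the default /var/tmp/spdk.sock of the target started above:

    # Target configuration equivalent to the rpc_cmd traces above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512            # 1000 MiB backing device, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
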
00:30:42.187 12:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:42.187 12:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.187 12:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:42.449 [several hundred interleaved "Read/Write completed with error (sct=0, sc=8)" completion records and "starting I/O failed: -6" markers omitted]
00:30:42.450 [2024-12-09 12:05:50.135680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a2c0 is same with the state(6) to be set
00:30:42.450 [2024-12-09 12:05:50.137947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ec400d490 is same with the state(6) to be set
00:30:43.395 [2024-12-09 12:05:51.097929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b9b0 is same with the state(6) to be set
00:30:43.395 [2024-12-09 12:05:51.138095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ec400d020 is same with the state(6) to be set
00:30:43.395 [2024-12-09 12:05:51.140591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ec400d7c0 is same with the state(6) to be set
00:30:43.395 [2024-12-09 12:05:51.141204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a4a0 is same with the state(6) to be set
00:30:43.395 [2024-12-09 12:05:51.141366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a860 is same with the state(6) to be set
00:30:43.395 Initializing NVMe Controllers
00:30:43.396 Attached to NVMe over Fabrics
00:30:43.396 Controller IO queue size 128, less than required.
00:30:43.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:43.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:43.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:43.396 Initialization complete. Launching workers.
00:30:43.396 ========================================================
00:30:43.396 Latency(us)
00:30:43.396 Device Information : IOPS MiB/s Average min max
00:30:43.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.51 0.09 889877.76 314.89 1010721.70
00:30:43.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.24 0.07 1069310.99 240.91 2003440.69
00:30:43.396 ========================================================
00:30:43.396 Total : 338.75 0.17 967869.30 240.91 2003440.69
00:30:43.396
00:30:43.396 [2024-12-09 12:05:51.141906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0b9b0 (9): Bad file descriptor
00:30:43.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:43.396 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.396 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:43.396 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 258019
00:30:43.396 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 258019
00:30:43.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (258019) - No such process
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 258019
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 258019
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 258019
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:43.970 [2024-12-09 12:05:51.675303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=258689
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:30:43.970 12:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:43.970 [2024-12-09 12:05:51.751269] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
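For reference, the Total row in the latency table above is just the per-core sum and the request-weighted mean: 191.51 + 147.24 = 338.75 IOPS, and (191.51 * 889877.76 + 147.24 * 1069310.99) / 338.75 ≈ 967869.30 us. The @57/@58 entries that follow are delete_subsystem.sh polling the relaunched perf process while the subsystem is deleted underneath it. A condensed bash sketch of that liveness-poll pattern (the PID value and the 20-iteration bound are taken from this trace; the verbatim script in test/nvmf/target/delete_subsystem.sh differs in detail):

  perf_pid=258689       # in the real script this is $! of the backgrounded spdk_nvme_perf
  delay=0
  # kill -0 only probes whether the PID exists; the loop ends when perf exits
  # (its target was deleted under load) or the roughly 10 s budget runs out.
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf still alive after timeout" >&2; break; }
      sleep 0.5
  done
  wait "$perf_pid" || true    # reap it; a nonzero exit status is the expected outcome here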
00:30:44.542 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:44.542 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:44.542 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:45.113 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:45.113 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:45.113 12:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:45.373 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:45.373 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:45.373 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:45.943 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:45.943 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:45.943 12:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:46.514 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:46.514 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:46.514 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:47.084 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:47.085 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:47.085 12:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:47.346 Initializing NVMe Controllers
00:30:47.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:47.346 Controller IO queue size 128, less than required.
00:30:47.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:47.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:47.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:47.346 Initialization complete. Launching workers.
00:30:47.346 ========================================================
00:30:47.346 Latency(us)
00:30:47.346 Device Information : IOPS MiB/s Average min max
00:30:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002212.06 1000130.08 1005488.09
00:30:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004172.32 1000196.14 1042374.01
00:30:47.346 ========================================================
00:30:47.346 Total : 256.00 0.12 1003192.19 1000130.08 1042374.01
00:30:47.346
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 258689
00:30:47.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (258689) - No such process
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 258689
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:47.346 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # sync
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # '[' tcp == tcp ']'
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # set +e
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # for i in {1..20}
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:30:47.606 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # set -e
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@130 -- # return 0
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 257670 ']'
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 257670
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 257670 ']'
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 257670
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257670 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257670' 00:30:47.606 killing process with pid 257670 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 257670 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 257670 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # iptr 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # remove_spdk_ns 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.606 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:30:50.152 00:30:50.152 real 0m18.259s 00:30:50.152 user 0m26.606s 00:30:50.152 sys 0m7.407s 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:50.152 ************************************ 00:30:50.152 END TEST nvmf_delete_subsystem 00:30:50.152 ************************************ 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.152 ************************************ 00:30:50.152 START TEST nvmf_host_management 00:30:50.152 ************************************ 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:50.152 * Looking for test storage... 00:30:50.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.152 --rc genhtml_branch_coverage=1 00:30:50.152 --rc genhtml_function_coverage=1 00:30:50.152 --rc genhtml_legend=1 00:30:50.152 --rc geninfo_all_blocks=1 00:30:50.152 --rc geninfo_unexecuted_blocks=1 00:30:50.152 00:30:50.152 ' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.152 --rc genhtml_branch_coverage=1 00:30:50.152 --rc genhtml_function_coverage=1 00:30:50.152 --rc genhtml_legend=1 00:30:50.152 --rc geninfo_all_blocks=1 00:30:50.152 --rc geninfo_unexecuted_blocks=1 00:30:50.152 00:30:50.152 ' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.152 --rc genhtml_branch_coverage=1 00:30:50.152 --rc genhtml_function_coverage=1 00:30:50.152 --rc genhtml_legend=1 00:30:50.152 --rc geninfo_all_blocks=1 00:30:50.152 --rc geninfo_unexecuted_blocks=1 00:30:50.152 00:30:50.152 ' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.152 --rc genhtml_branch_coverage=1 00:30:50.152 --rc genhtml_function_coverage=1 00:30:50.152 --rc genhtml_legend=1 
00:30:50.152 --rc geninfo_all_blocks=1 00:30:50.152 --rc geninfo_unexecuted_blocks=1 00:30:50.152 00:30:50.152 ' 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.152 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # : 0 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@56 -- # have_pci_nics=0 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@310 -- # xtrace_disable 00:30:50.153 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_devs=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_devs 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_net_devs=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -a 
pci_net_devs 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@318 -- # pci_drivers=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@318 -- # local -A pci_drivers 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # net_devs=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga net_devs 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # e810=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga e810 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # x722=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga x722 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@323 -- # mlx=() 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@323 -- # local -ga mlx 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:30:58.299 12:06:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:58.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:58.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:58.299 12:06:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:58.299 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:58.299 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:58.299 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP=
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP=
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk
00:30:58.300 12:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:30:58.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:58.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms
00:30:58.300
00:30:58.300 --- 10.0.0.2 ping statistics ---
00:30:58.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:58.300 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:58.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:58.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:30:58.300
00:30:58.300 --- 10.0.0.1 ping statistics ---
00:30:58.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:58.300 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=263563
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 263563
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 263563 ']'
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
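Unrolled, the nvmf_tcp_init sequence traced above reduces to the following; device names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones discovered in this run, and this is a condensed sketch of the nvmf/common.sh helper, not its verbatim body:

  # The target port is isolated in its own network namespace; the initiator
  # port stays in the root namespace, and connectivity is checked both ways.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged with a comment so cleanup can strip the rule later:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns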
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:58.300 12:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:30:58.300 [2024-12-09 12:06:05.272377] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:58.300 [2024-12-09 12:06:05.273551] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
00:30:58.300 [2024-12-09 12:06:05.273604] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:58.300 [2024-12-09 12:06:05.374348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:58.300 [2024-12-09 12:06:05.429812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:58.300 [2024-12-09 12:06:05.429867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:58.300 [2024-12-09 12:06:05.429875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:58.300 [2024-12-09 12:06:05.429882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:58.300 [2024-12-09 12:06:05.429888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:58.300 [2024-12-09 12:06:05.431902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:58.300 [2024-12-09 12:06:05.432070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:58.300 [2024-12-09 12:06:05.432233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:58.300 [2024-12-09 12:06:05.432234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:58.300 [2024-12-09 12:06:05.512609] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:58.300 [2024-12-09 12:06:05.513343] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:58.300 [2024-12-09 12:06:05.513879] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:30:58.300 [2024-12-09 12:06:05.514236] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:30:58.300 [2024-12-09 12:06:05.514367] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
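The nvmfappstart/waitforlisten pair above boils down to starting nvmf_tgt inside the target namespace and polling until its RPC socket answers. A minimal stand-in, run from the SPDK repo root (the framework_wait_init poll is an assumed simplification; the real waitforlisten helper in autotest_common.sh is more thorough):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  # Poll the RPC server on /var/tmp/spdk.sock until it is ready, bailing out
  # early if the target process dies first.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done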
00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.300 [2024-12-09 12:06:06.129132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.300 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.562 Malloc0 00:30:58.562 [2024-12-09 12:06:06.225169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=263846 00:30:58.562 12:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 263846 /var/tmp/bdevperf.sock 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 263846 ']' 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:30:58.562 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:30:58.563 { 00:30:58.563 "params": { 00:30:58.563 "name": "Nvme$subsystem", 00:30:58.563 "trtype": "$TEST_TRANSPORT", 00:30:58.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.563 "adrfam": "ipv4", 00:30:58.563 "trsvcid": "$NVMF_PORT", 00:30:58.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.563 "hdgst": ${hdgst:-false}, 00:30:58.563 "ddgst": ${ddgst:-false} 00:30:58.563 }, 00:30:58.563 "method": "bdev_nvme_attach_controller" 00:30:58.563 } 00:30:58.563 EOF 00:30:58.563 )") 00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:30:58.563 12:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:30:58.563 "params": { 00:30:58.563 "name": "Nvme0", 00:30:58.563 "trtype": "tcp", 00:30:58.563 "traddr": "10.0.0.2", 00:30:58.563 "adrfam": "ipv4", 00:30:58.563 "trsvcid": "4420", 00:30:58.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.563 "hdgst": false, 00:30:58.563 "ddgst": false 00:30:58.563 }, 00:30:58.563 "method": "bdev_nvme_attach_controller" 00:30:58.563 }' 00:30:58.563 [2024-12-09 12:06:06.329090] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:30:58.563 [2024-12-09 12:06:06.329147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263846 ] 00:30:58.563 [2024-12-09 12:06:06.417508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.823 [2024-12-09 12:06:06.454549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.084 Running I/O for 10 seconds... 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.347 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.347 [2024-12-09 12:06:07.208747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2565e20 is same with the state(6) to be set 00:30:59.347 [2024-12-09 12:06:07.208899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.347 [2024-12-09 12:06:07.208938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.347 [2024-12-09 12:06:07.208956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.347 [2024-12-09 12:06:07.208964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.347 [2024-12-09 12:06:07.208974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.347 [2024-12-09 12:06:07.208982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.347 [2024-12-09 12:06:07.208991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.347 [2024-12-09 12:06:07.208998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.347 [2024-12-09 12:06:07.209008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.347 [2024-12-09 12:06:07.209015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats here for each remaining in-flight I/O: WRITE cids 48-63 (lba 96256-98176) and READ cids 0-32 (lba 90112-94208), every one completing with ABORTED - SQ DELETION (00/08) ...]
00:30:59.348 [2024-12-09 12:06:07.209855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209862] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.348 [2024-12-09 12:06:07.209946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.348 [2024-12-09 12:06:07.209955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.349 [2024-12-09 12:06:07.209962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.349 [2024-12-09 12:06:07.209972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.349 [2024-12-09 12:06:07.209979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.349 [2024-12-09 12:06:07.209990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.349 [2024-12-09 12:06:07.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.349 [2024-12-09 12:06:07.210007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:59.349 [2024-12-09 12:06:07.210014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.349 [2024-12-09 12:06:07.211269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:59.349 task offset: 95616 on job bdev=Nvme0n1 fails 00:30:59.349 00:30:59.349 Latency(us) 00:30:59.349 
[2024-12-09T11:06:07.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:59.349 Job: Nvme0n1 ended in about 0.49 seconds with error 00:30:59.349 Verification LBA range: start 0x0 length 0x400 00:30:59.349 Nvme0n1 : 0.49 1430.61 89.41 130.06 0.00 39925.06 1645.23 35607.89 00:30:59.349 [2024-12-09T11:06:07.235Z] =================================================================================================================== 00:30:59.349 [2024-12-09T11:06:07.235Z] Total : 1430.61 89.41 130.06 0.00 39925.06 1645.23 35607.89 00:30:59.349 [2024-12-09 12:06:07.213281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:59.349 [2024-12-09 12:06:07.213304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c07c20 (9): Bad file descriptor 00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:59.349 [2024-12-09 12:06:07.214334] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.349 [2024-12-09 12:06:07.214431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:59.349 [2024-12-09 12:06:07.214451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.349 [2024-12-09 12:06:07.214466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:59.349 [2024-12-09 12:06:07.214474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:59.349 [2024-12-09 12:06:07.214482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.349 [2024-12-09 12:06:07.214489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c07c20 00:30:59.349 [2024-12-09 12:06:07.214508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c07c20 (9): Bad file descriptor 00:30:59.349 [2024-12-09 12:06:07.214519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:59.349 [2024-12-09 12:06:07.214527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:59.349 [2024-12-09 12:06:07.214536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:59.349 [2024-12-09 12:06:07.214546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
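In short, what the trace above shows: while bdevperf had 64 I/Os in flight, host_management.sh@84 removed host0 from cnode0; the target tore down the queue pair (hence the run of SQ-DELETION aborts), and bdevperf's automatic reconnect was then refused at FABRIC CONNECT with "does not allow host" (sct 1, sc 132), leaving the controller in a failed state. host_management.sh@85 re-adds the host so the follow-up run can attach again. The same toggle, sketched as standalone RPCs against a running target (the relative rpc.py path assumes the SPDK source tree as CWD):

    # Revoke host0's access; its in-flight I/O is aborted and reconnects are refused:
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access; new CONNECTs from host0 succeed again:
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0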
00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.349 12:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 263846 00:31:00.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (263846) - No such process 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:00.735 { 00:31:00.735 "params": { 00:31:00.735 "name": "Nvme$subsystem", 00:31:00.735 "trtype": "$TEST_TRANSPORT", 00:31:00.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.735 "adrfam": "ipv4", 00:31:00.735 "trsvcid": "$NVMF_PORT", 00:31:00.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.735 "hdgst": ${hdgst:-false}, 00:31:00.735 "ddgst": ${ddgst:-false} 00:31:00.735 }, 00:31:00.735 "method": "bdev_nvme_attach_controller" 00:31:00.735 } 00:31:00.735 EOF 00:31:00.735 )") 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:31:00.735 12:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:00.735 "params": { 00:31:00.735 "name": "Nvme0", 00:31:00.735 "trtype": "tcp", 00:31:00.735 "traddr": "10.0.0.2", 00:31:00.735 "adrfam": "ipv4", 00:31:00.735 "trsvcid": "4420", 00:31:00.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.735 "hdgst": false, 00:31:00.735 "ddgst": false 00:31:00.735 }, 00:31:00.735 "method": "bdev_nvme_attach_controller" 00:31:00.735 }' 00:31:00.735 [2024-12-09 12:06:08.285333] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
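With host access restored, the test launches a second bdevperf for one second (-t 1) against the same subsystem to confirm I/O completes cleanly again. In the result tables here, the MiB/s column is simply IOPS times the I/O size, so with 65536-byte I/Os it is IOPS divided by 16, which makes the numbers easy to spot-check:

    # Sanity check of the MiB/s column (64 KiB I/Os => IOPS / 16):
    echo 'scale=2; 1992.80 * 65536 / 1048576' | bc    # 124.55, matching the successful run below
    echo 'scale=2; 1430.61 * 65536 / 1048576' | bc    # 89.41, matching the aborted run above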
00:31:00.735 [2024-12-09 12:06:08.285388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264195 ] 00:31:00.735 [2024-12-09 12:06:08.374922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.735 [2024-12-09 12:06:08.409735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.735 Running I/O for 1 seconds... 00:31:02.119 1956.00 IOPS, 122.25 MiB/s 00:31:02.119 Latency(us) 00:31:02.119 [2024-12-09T11:06:10.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.119 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:02.119 Verification LBA range: start 0x0 length 0x400 00:31:02.119 Nvme0n1 : 1.01 1992.80 124.55 0.00 0.00 31399.23 3399.68 36918.61 00:31:02.119 [2024-12-09T11:06:10.005Z] =================================================================================================================== 00:31:02.119 [2024-12-09T11:06:10.005Z] Total : 1992.80 124.55 0.00 0.00 31399.23 3399.68 36918.61 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@122 -- # sync 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # set +e 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # for i in {1..20} 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:31:02.119 rmmod nvme_tcp 00:31:02.119 rmmod nvme_fabrics 00:31:02.119 rmmod nvme_keyring 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # set -e 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@130 -- # return 0 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 263563 ']' 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 263563 00:31:02.119 12:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 263563 ']' 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 263563 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263563 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263563' 00:31:02.119 killing process with pid 263563 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 263563 00:31:02.119 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 263563 00:31:02.119 [2024-12-09 12:06:09.981598] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:02.379 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # iptr 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # remove_spdk_ns 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.380 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:04.292 00:31:04.292 real 0m14.468s 00:31:04.292 user 0m18.824s 
00:31:04.292 sys 0m7.453s 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:04.292 ************************************ 00:31:04.292 END TEST nvmf_host_management 00:31:04.292 ************************************ 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.292 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:04.554 ************************************ 00:31:04.554 START TEST nvmf_lvol 00:31:04.554 ************************************ 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:04.554 * Looking for test storage... 00:31:04.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:04.554 12:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:04.554 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.555 --rc genhtml_branch_coverage=1 00:31:04.555 --rc genhtml_function_coverage=1 00:31:04.555 --rc genhtml_legend=1 00:31:04.555 --rc geninfo_all_blocks=1 00:31:04.555 --rc geninfo_unexecuted_blocks=1 00:31:04.555 00:31:04.555 ' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.555 --rc genhtml_branch_coverage=1 00:31:04.555 --rc genhtml_function_coverage=1 00:31:04.555 --rc genhtml_legend=1 00:31:04.555 --rc geninfo_all_blocks=1 00:31:04.555 --rc geninfo_unexecuted_blocks=1 00:31:04.555 00:31:04.555 ' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.555 --rc genhtml_branch_coverage=1 00:31:04.555 --rc genhtml_function_coverage=1 00:31:04.555 --rc genhtml_legend=1 00:31:04.555 --rc geninfo_all_blocks=1 00:31:04.555 --rc geninfo_unexecuted_blocks=1 00:31:04.555 00:31:04.555 ' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:04.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.555 --rc genhtml_branch_coverage=1 00:31:04.555 --rc genhtml_function_coverage=1 00:31:04.555 --rc 
genhtml_legend=1 00:31:04.555 --rc geninfo_all_blocks=1 00:31:04.555 --rc geninfo_unexecuted_blocks=1 00:31:04.555 00:31:04.555 ' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # : 0 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.555 12:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@56 -- # have_pci_nics=0 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@310 -- # xtrace_disable 00:31:04.555 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_devs=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_devs 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_net_devs=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@318 -- # pci_drivers=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@318 -- # local -A pci_drivers 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # net_devs=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga net_devs 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # e810=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga e810 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # x722=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga x722 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@323 -- # mlx=() 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@323 -- # local -ga mlx 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:31:12.700 12:06:19 
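The arrays above classify NICs by PCI vendor:device id; 0x8086:0x159b is the E810 part both ports on this rig match. The same classification as a standalone sysfs scan (a sketch; the suite's own pci_bus_cache helper works differently):

  intel=0x8086 mellanox=0x15b3
  e810=() mlx=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      case "$vendor:$device" in
          "$intel:0x1592" | "$intel:0x159b") e810+=("${pci##*/}") ;;
          "$mellanox:"*) mlx+=("${pci##*/}") ;;   # the suite lists explicit mlx ids
      esac
  done
  for pci in "${e810[@]}"; do
      # a port's kernel interface name lives under the device's net/ subdirectory
      echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
  done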
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:12.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:12.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:12.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for 
pci in "${pci_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:12.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:12.700 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.701 
12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2
00:31:12.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:12.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms
00:31:12.701
00:31:12.701 --- 10.0.0.2 ping statistics ---
00:31:12.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:12.701 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms
00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:12.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:12.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:31:12.701
00:31:12.701 --- 10.0.0.1 ping statistics ---
00:31:12.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:12.701 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=269148 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 269148 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 269148 ']' 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:12.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.701 12:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:12.701 [2024-12-09 12:06:19.906742] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
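The nvmf_tcp_init sequence just traced, as a standalone recipe: the first E810 port (cvl_0_0) becomes the target side inside namespace cvl_0_0_ns_spdk, its twin (cvl_0_1) stays in the root namespace as the initiator, and an iptables hole plus two pings prove the 10.0.0.0/24 link before the target starts:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns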
00:31:12.701 [2024-12-09 12:06:19.907908] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:31:12.701 [2024-12-09 12:06:19.907963] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.701 [2024-12-09 12:06:20.007826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:12.701 [2024-12-09 12:06:20.065238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.701 [2024-12-09 12:06:20.065298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.701 [2024-12-09 12:06:20.065306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.701 [2024-12-09 12:06:20.065314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.701 [2024-12-09 12:06:20.065320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.701 [2024-12-09 12:06:20.067205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.701 [2024-12-09 12:06:20.067338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.701 [2024-12-09 12:06:20.067342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.701 [2024-12-09 12:06:20.150905] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:12.701 [2024-12-09 12:06:20.151659] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:12.701 [2024-12-09 12:06:20.151925] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:12.701 [2024-12-09 12:06:20.152056] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
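The launch behind the notices above, with a simplified stand-in for waitforlisten (the real helper retries up to $max_retries; rpc_get_methods is a core SPDK RPC, so a successful call means the socket is live; $rootdir is the SPDK checkout):

  ip netns exec cvl_0_0_ns_spdk \
      "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # crude waitforlisten: the target is up once /var/tmp/spdk.sock answers
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done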
00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.962 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:13.222 [2024-12-09 12:06:20.912351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.223 12:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:13.483 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:13.483 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:13.483 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:13.483 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:13.744 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:14.005 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=62c84487-1fb2-4943-8105-ec3bc59b050c 00:31:14.005 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62c84487-1fb2-4943-8105-ec3bc59b050c lvol 20 00:31:14.266 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e689b0e4-6ed5-48b3-b591-8f6b5b6b1db8 00:31:14.266 12:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:14.266 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e689b0e4-6ed5-48b3-b591-8f6b5b6b1db8 00:31:14.527 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.787 [2024-12-09 12:06:22.464136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:14.787 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.787 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=269689 00:31:14.787 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:14.787 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:16.171 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e689b0e4-6ed5-48b3-b591-8f6b5b6b1db8 MY_SNAPSHOT 00:31:16.171 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6a4854e4-3e5d-496c-bd54-04fec4855a3a 00:31:16.171 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e689b0e4-6ed5-48b3-b591-8f6b5b6b1db8 30 00:31:16.431 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6a4854e4-3e5d-496c-bd54-04fec4855a3a MY_CLONE 00:31:16.692 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7e36d95b-815a-4dad-bd88-9098dc6f285d 00:31:16.692 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7e36d95b-815a-4dad-bd88-9098dc6f285d 00:31:16.952 12:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 269689
00:31:26.950 Initializing NVMe Controllers
00:31:26.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:31:26.950 Controller IO queue size 128, less than required.
00:31:26.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:26.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:31:26.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:31:26.950 Initialization complete. Launching workers.
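The RPC sequence traced since the target came up, condensed into one runnable block ($rootdir assumed to be the SPDK checkout; rpc.py reaches /var/tmp/spdk.sock from the root namespace because unix sockets are not confined by the netns). Two malloc bdevs become a raid0, an lvstore sits on the raid, a 20 MiB lvol is exported over NVMe/TCP, and the snapshot/resize/clone/inflate steps run while spdk_nvme_perf writes to the namespace:

  rpc="$rootdir/scripts/rpc.py"                       # the suite's $rpc_py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # flags exactly as traced above
  m0=$($rpc bdev_malloc_create 64 512)                # 64 MiB, 512 B blocks; prints the name
  m1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol; prints its UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ...and while spdk_nvme_perf runs against 10.0.0.2:4420:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # lvol becomes a thin clone of snap
  $rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol from 20 to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                     # allocate all clusters, detach from snap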
00:31:26.950 ========================================================
00:31:26.950                                                                            Latency(us)
00:31:26.950 Device Information                                                       :      IOPS     MiB/s   Average       min       max
00:31:26.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:  15248.30     59.56   8395.30   1576.09  52730.78
00:31:26.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:  15158.40     59.21   8447.20   3700.41  62644.46
00:31:26.950 ========================================================
00:31:26.950 Total                                                                    :  30406.70    118.78   8421.17   1576.09  62644.46
00:31:26.950
00:31:26.950 12:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e689b0e4-6ed5-48b3-b591-8f6b5b6b1db8 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62c84487-1fb2-4943-8105-ec3bc59b050c 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@122 -- # sync 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # set +e 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # for i in {1..20} 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp
00:31:26.950 rmmod nvme_tcp
00:31:26.950 rmmod nvme_fabrics
00:31:26.950 rmmod nvme_keyring
00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # set -e 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@130 -- # return 0 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 269148 ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 269148 ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
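Teardown mirrors setup in reverse, per the trace: drop the subsystem so no initiator holds the namespace, delete the lvol before the lvstore it lives in, then unload the initiator-side kernel modules and kill the target (a condensed sketch; $rpc, $lvol, $lvs and $nvmfpid as in the sketches above):

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  modprobe -v -r nvme-tcp     # pulls nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"             # killprocess 269148 in the trace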
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 269148' 00:31:26.950 killing process with pid 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 269148 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # iptr 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:31:26.950 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.951 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # remove_spdk_ns 00:31:26.951 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.951 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.951 12:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:31:28.334 00:31:28.334 real 0m23.651s 00:31:28.334 user 0m55.779s 00:31:28.334 sys 0m10.494s 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:28.334 ************************************ 00:31:28.334 END TEST nvmf_lvol 00:31:28.334 ************************************ 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:28.334 ************************************ 00:31:28.334 START TEST nvmf_lvs_grow 00:31:28.334 
************************************ 00:31:28.334 12:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:28.334 * Looking for test storage... 00:31:28.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:28.334 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:28.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.335 --rc genhtml_branch_coverage=1 00:31:28.335 --rc genhtml_function_coverage=1 00:31:28.335 --rc genhtml_legend=1 00:31:28.335 --rc geninfo_all_blocks=1 00:31:28.335 --rc geninfo_unexecuted_blocks=1 00:31:28.335 00:31:28.335 ' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:28.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.335 --rc genhtml_branch_coverage=1 00:31:28.335 --rc genhtml_function_coverage=1 00:31:28.335 --rc genhtml_legend=1 00:31:28.335 --rc geninfo_all_blocks=1 00:31:28.335 --rc geninfo_unexecuted_blocks=1 00:31:28.335 00:31:28.335 ' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:28.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.335 --rc genhtml_branch_coverage=1 00:31:28.335 --rc genhtml_function_coverage=1 00:31:28.335 --rc genhtml_legend=1 00:31:28.335 --rc geninfo_all_blocks=1 00:31:28.335 --rc geninfo_unexecuted_blocks=1 00:31:28.335 00:31:28.335 ' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:28.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.335 --rc genhtml_branch_coverage=1 00:31:28.335 --rc genhtml_function_coverage=1 00:31:28.335 --rc genhtml_legend=1 00:31:28.335 --rc geninfo_all_blocks=1 00:31:28.335 --rc geninfo_unexecuted_blocks=1 00:31:28.335 00:31:28.335 ' 00:31:28.335 12:06:36 
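The cmp_versions walk above is a field-wise numeric comparison: split both version strings on '.', '-' and ':', then compare component by component (ver1_l=2 and ver2_l=1 in the trace are the field counts of "1.15" and "2"). A distilled sketch of the same check (lt matches the helper's name; the body is simplified):

  lt() {
      local IFS=.-:                    # split fields on '.', '-' and ':'
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller field wins
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                         # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2.x"            # true: 1 < 2 in the first field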
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…the same three toolchain dirs repeated…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.335
12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[…same dirs repeated…]:/var/lib/snapd/snap/bin 00:31:28.335
12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[…same dirs repeated…]:/var/lib/snapd/snap/bin 00:31:28.335
12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:28.335
12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[…same dirs repeated…]:/var/lib/snapd/snap/bin 00:31:28.335
12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # : 0 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@56 -- # have_pci_nics=0 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:28.335 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@310 -- # xtrace_disable 00:31:28.336 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_devs=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_devs 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_net_devs=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@318 -- # pci_drivers=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@318 -- # local -A pci_drivers 00:31:36.477 12:06:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # net_devs=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga net_devs 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # e810=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga e810 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # x722=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga x722 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@323 -- # mlx=() 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@323 -- # local -ga mlx 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 
00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:36.477 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:36.477 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:36.477 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for 
pci in "${pci_devs[@]}" 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:36.477 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:31:36.477 12:06:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.477 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:31:36.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:31:36.478 00:31:36.478 --- 10.0.0.2 ping statistics --- 00:31:36.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.478 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:31:36.478 00:31:36.478 --- 10.0.0.1 ping statistics --- 00:31:36.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.478 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=276037 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 276037 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 276037 ']' 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.478 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 [2024-12-09 12:06:43.683919] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
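The trace above condenses to a small network-namespace test bed: the target-side E810 port (cvl_0_0) is moved into a private namespace so one host can run initiator and target NVMe/TCP traffic over real hardware, and the target app is then launched inside that namespace in interrupt mode. A minimal sketch of the same steps, with interface names and addresses taken from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target sanity check
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    # nvmfappstart prefixes the target with NVMF_TARGET_NS_CMD, i.e.:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &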
00:31:36.478 [2024-12-09 12:06:43.685033] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:31:36.478 [2024-12-09 12:06:43.685084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.478 [2024-12-09 12:06:43.783816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.478 [2024-12-09 12:06:43.834759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.478 [2024-12-09 12:06:43.834813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.478 [2024-12-09 12:06:43.834821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.478 [2024-12-09 12:06:43.834829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.478 [2024-12-09 12:06:43.834835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.478 [2024-12-09 12:06:43.835582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.478 [2024-12-09 12:06:43.912764] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.478 [2024-12-09 12:06:43.913058] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.739 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:36.999 [2024-12-09 12:06:44.720221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.999 ************************************ 00:31:36.999 START TEST lvs_grow_clean 00:31:36.999 ************************************ 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.999 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:37.260 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:37.260 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:37.520 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 lvol 150 00:31:37.780 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=877986a4-edfb-4dde-bcf9-b02b42ca2bb8 00:31:37.780 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:37.780 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:38.041 [2024-12-09 12:06:45.712136] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:38.041 [2024-12-09 12:06:45.712302] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:38.041 true 00:31:38.041 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:38.041 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:38.041 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:38.041 12:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:38.301 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 877986a4-edfb-4dde-bcf9-b02b42ca2bb8 00:31:38.562 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.563 [2024-12-09 12:06:46.412817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.563 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.823 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=276552 00:31:38.823 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.823 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 276552 /var/tmp/bdevperf.sock 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 276552 ']' 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.824 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:38.824 [2024-12-09 12:06:46.652149] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:31:38.824 [2024-12-09 12:06:46.652224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276552 ] 00:31:39.085 [2024-12-09 12:06:46.744087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.085 [2024-12-09 12:06:46.796961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.657 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.657 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:39.657 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:40.228 Nvme0n1 00:31:40.228 12:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:40.228 [ 00:31:40.228 { 00:31:40.228 "name": "Nvme0n1", 00:31:40.228 "aliases": [ 00:31:40.228 "877986a4-edfb-4dde-bcf9-b02b42ca2bb8" 00:31:40.228 ], 00:31:40.228 "product_name": "NVMe disk", 00:31:40.228 "block_size": 4096, 00:31:40.228 "num_blocks": 38912, 00:31:40.228 "uuid": "877986a4-edfb-4dde-bcf9-b02b42ca2bb8", 00:31:40.228 "numa_id": 0, 00:31:40.228 "assigned_rate_limits": { 00:31:40.228 "rw_ios_per_sec": 0, 00:31:40.228 "rw_mbytes_per_sec": 0, 00:31:40.228 "r_mbytes_per_sec": 0, 00:31:40.228 "w_mbytes_per_sec": 0 00:31:40.228 }, 00:31:40.228 "claimed": false, 00:31:40.228 "zoned": false, 00:31:40.228 "supported_io_types": { 00:31:40.228 "read": true, 00:31:40.228 "write": true, 00:31:40.228 "unmap": true, 00:31:40.228 "flush": true, 00:31:40.228 "reset": true, 00:31:40.228 "nvme_admin": true, 00:31:40.228 "nvme_io": true, 00:31:40.228 "nvme_io_md": false, 00:31:40.228 "write_zeroes": true, 00:31:40.228 "zcopy": false, 00:31:40.228 "get_zone_info": false, 00:31:40.228 "zone_management": false, 00:31:40.228 "zone_append": false, 00:31:40.228 "compare": true, 00:31:40.228 "compare_and_write": true, 00:31:40.228 "abort": true, 00:31:40.228 "seek_hole": false, 00:31:40.228 "seek_data": false, 00:31:40.228 "copy": true, 
00:31:40.228 "nvme_iov_md": false 00:31:40.228 }, 00:31:40.228 "memory_domains": [ 00:31:40.228 { 00:31:40.228 "dma_device_id": "system", 00:31:40.228 "dma_device_type": 1 00:31:40.228 } 00:31:40.228 ], 00:31:40.228 "driver_specific": { 00:31:40.228 "nvme": [ 00:31:40.228 { 00:31:40.228 "trid": { 00:31:40.228 "trtype": "TCP", 00:31:40.228 "adrfam": "IPv4", 00:31:40.228 "traddr": "10.0.0.2", 00:31:40.228 "trsvcid": "4420", 00:31:40.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:40.228 }, 00:31:40.228 "ctrlr_data": { 00:31:40.228 "cntlid": 1, 00:31:40.228 "vendor_id": "0x8086", 00:31:40.228 "model_number": "SPDK bdev Controller", 00:31:40.228 "serial_number": "SPDK0", 00:31:40.228 "firmware_revision": "25.01", 00:31:40.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.228 "oacs": { 00:31:40.228 "security": 0, 00:31:40.228 "format": 0, 00:31:40.228 "firmware": 0, 00:31:40.228 "ns_manage": 0 00:31:40.228 }, 00:31:40.228 "multi_ctrlr": true, 00:31:40.228 "ana_reporting": false 00:31:40.228 }, 00:31:40.228 "vs": { 00:31:40.228 "nvme_version": "1.3" 00:31:40.228 }, 00:31:40.228 "ns_data": { 00:31:40.228 "id": 1, 00:31:40.229 "can_share": true 00:31:40.229 } 00:31:40.229 } 00:31:40.229 ], 00:31:40.229 "mp_policy": "active_passive" 00:31:40.229 } 00:31:40.229 } 00:31:40.229 ] 00:31:40.229 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:40.229 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=276761 00:31:40.229 12:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:40.229 Running I/O for 10 seconds... 
00:31:41.614 Latency(us) 00:31:41.614 [2024-12-09T11:06:49.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.614 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:31:41.614 [2024-12-09T11:06:49.500Z] =================================================================================================================== 00:31:41.614 [2024-12-09T11:06:49.500Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:31:41.614 00:31:42.186 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:42.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.447 Nvme0n1 : 2.00 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:31:42.447 [2024-12-09T11:06:50.333Z] =================================================================================================================== 00:31:42.447 [2024-12-09T11:06:50.333Z] Total : 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:31:42.447 00:31:42.447 true 00:31:42.447 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:42.447 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:42.708 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:42.708 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:42.708 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 276761 00:31:43.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.280 Nvme0n1 : 3.00 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:31:43.280 [2024-12-09T11:06:51.166Z] =================================================================================================================== 00:31:43.280 [2024-12-09T11:06:51.166Z] Total : 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:31:43.280 00:31:44.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:44.666 Nvme0n1 : 4.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:44.666 [2024-12-09T11:06:52.552Z] =================================================================================================================== 00:31:44.666 [2024-12-09T11:06:52.552Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:44.666 00:31:45.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.236 Nvme0n1 : 5.00 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:31:45.236 [2024-12-09T11:06:53.122Z] =================================================================================================================== 00:31:45.236 [2024-12-09T11:06:53.122Z] Total : 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:31:45.236 00:31:46.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.620 Nvme0n1 : 6.00 19304.00 75.41 0.00 0.00 0.00 0.00 0.00 00:31:46.620 [2024-12-09T11:06:54.506Z] 
=================================================================================================================== 00:31:46.620 [2024-12-09T11:06:54.506Z] Total : 19304.00 75.41 0.00 0.00 0.00 0.00 0.00 00:31:46.620 00:31:47.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.562 Nvme0n1 : 7.00 20211.14 78.95 0.00 0.00 0.00 0.00 0.00 00:31:47.562 [2024-12-09T11:06:55.448Z] =================================================================================================================== 00:31:47.562 [2024-12-09T11:06:55.448Z] Total : 20211.14 78.95 0.00 0.00 0.00 0.00 0.00 00:31:47.562 00:31:48.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.505 Nvme0n1 : 8.00 20891.75 81.61 0.00 0.00 0.00 0.00 0.00 00:31:48.505 [2024-12-09T11:06:56.391Z] =================================================================================================================== 00:31:48.505 [2024-12-09T11:06:56.391Z] Total : 20891.75 81.61 0.00 0.00 0.00 0.00 0.00 00:31:48.505 00:31:49.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.447 Nvme0n1 : 9.00 21420.89 83.68 0.00 0.00 0.00 0.00 0.00 00:31:49.447 [2024-12-09T11:06:57.333Z] =================================================================================================================== 00:31:49.447 [2024-12-09T11:06:57.333Z] Total : 21420.89 83.68 0.00 0.00 0.00 0.00 0.00 00:31:49.447 00:31:50.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.390 Nvme0n1 : 10.00 21844.40 85.33 0.00 0.00 0.00 0.00 0.00 00:31:50.390 [2024-12-09T11:06:58.276Z] =================================================================================================================== 00:31:50.390 [2024-12-09T11:06:58.276Z] Total : 21844.40 85.33 0.00 0.00 0.00 0.00 0.00 00:31:50.390 00:31:50.390 00:31:50.390 Latency(us) 00:31:50.390 [2024-12-09T11:06:58.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.390 Nvme0n1 : 10.00 21844.29 85.33 0.00 0.00 5856.73 3249.49 32331.09 00:31:50.390 [2024-12-09T11:06:58.276Z] =================================================================================================================== 00:31:50.390 [2024-12-09T11:06:58.276Z] Total : 21844.29 85.33 0.00 0.00 5856.73 3249.49 32331.09 00:31:50.390 { 00:31:50.390 "results": [ 00:31:50.390 { 00:31:50.390 "job": "Nvme0n1", 00:31:50.390 "core_mask": "0x2", 00:31:50.390 "workload": "randwrite", 00:31:50.390 "status": "finished", 00:31:50.390 "queue_depth": 128, 00:31:50.390 "io_size": 4096, 00:31:50.390 "runtime": 10.00298, 00:31:50.390 "iops": 21844.290401460366, 00:31:50.390 "mibps": 85.32925938070456, 00:31:50.390 "io_failed": 0, 00:31:50.390 "io_timeout": 0, 00:31:50.390 "avg_latency_us": 5856.726263569297, 00:31:50.390 "min_latency_us": 3249.4933333333333, 00:31:50.390 "max_latency_us": 32331.093333333334 00:31:50.390 } 00:31:50.390 ], 00:31:50.390 "core_count": 1 00:31:50.390 } 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 276552 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 276552 ']' 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 276552 
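The cluster-count check buried in the table above is the actual point of the test: two seconds into the 10-second randwrite run, bdev_lvol_grow_lvstore extends the lvstore onto the rescanned 400M bdev while I/O is in flight, and the test asserts the new size. Roughly, per the @60-@62 trace (UUID from this run; 400M at a 4M cluster size leaves 99 data clusters after metadata overhead):

    scripts/rpc.py bdev_lvol_grow_lvstore -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489
    data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores \
                      -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 \
                    | jq -r '.[0].total_data_clusters')
    (( data_clusters == 99 ))    # was 49 before the grow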
00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276552 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276552' 00:31:50.390 killing process with pid 276552 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 276552 00:31:50.390 Received shutdown signal, test time was about 10.000000 seconds 00:31:50.390 00:31:50.390 Latency(us) 00:31:50.390 [2024-12-09T11:06:58.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.390 [2024-12-09T11:06:58.276Z] =================================================================================================================== 00:31:50.390 [2024-12-09T11:06:58.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.390 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 276552 00:31:50.651 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.651 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.911 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:50.911 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:51.172 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:51.172 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:51.172 12:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:51.172 [2024-12-09 12:06:59.000193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 
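Teardown then exercises the hot-remove path: free_clusters must read 61 (99 total minus the 38 clusters the 150M lvol occupies, per the bdev dump later in the log), and once bdev_aio_delete rips the base bdev out from under the lvstore, bdev_lvol_get_lvstores has to fail. The NOT helper traced below simply inverts the exit status; an equivalent inline check, as a sketch:

    scripts/rpc.py bdev_aio_delete aio_bdev    # hot-removes the lvstore's base bdev
    if scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489; then
        echo "lvstore survived base bdev removal" >&2; exit 1
    fi    # expected: JSON-RPC error -19, "No such device"

The aio bdev is then recreated from the same file and the lvol re-read, confirming the lvstore metadata survived on disk.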
00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.172 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:51.434 request: 00:31:51.434 { 00:31:51.434 "uuid": "5510eeff-fb14-4f2e-86b4-07bcf1fe1489", 00:31:51.434 "method": "bdev_lvol_get_lvstores", 00:31:51.434 "req_id": 1 00:31:51.434 } 00:31:51.434 Got JSON-RPC error response 00:31:51.434 response: 00:31:51.434 { 00:31:51.434 "code": -19, 00:31:51.434 "message": "No such device" 00:31:51.434 } 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:51.434 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:51.435 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:51.696 aio_bdev 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
877986a4-edfb-4dde-bcf9-b02b42ca2bb8 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=877986a4-edfb-4dde-bcf9-b02b42ca2bb8 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.696 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:51.957 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 877986a4-edfb-4dde-bcf9-b02b42ca2bb8 -t 2000 00:31:51.957 [ 00:31:51.957 { 00:31:51.957 "name": "877986a4-edfb-4dde-bcf9-b02b42ca2bb8", 00:31:51.957 "aliases": [ 00:31:51.957 "lvs/lvol" 00:31:51.957 ], 00:31:51.957 "product_name": "Logical Volume", 00:31:51.957 "block_size": 4096, 00:31:51.957 "num_blocks": 38912, 00:31:51.957 "uuid": "877986a4-edfb-4dde-bcf9-b02b42ca2bb8", 00:31:51.957 "assigned_rate_limits": { 00:31:51.957 "rw_ios_per_sec": 0, 00:31:51.957 "rw_mbytes_per_sec": 0, 00:31:51.957 "r_mbytes_per_sec": 0, 00:31:51.957 "w_mbytes_per_sec": 0 00:31:51.957 }, 00:31:51.957 "claimed": false, 00:31:51.957 "zoned": false, 00:31:51.957 "supported_io_types": { 00:31:51.957 "read": true, 00:31:51.957 "write": true, 00:31:51.957 "unmap": true, 00:31:51.957 "flush": false, 00:31:51.957 "reset": true, 00:31:51.957 "nvme_admin": false, 00:31:51.957 "nvme_io": false, 00:31:51.957 "nvme_io_md": false, 00:31:51.957 "write_zeroes": true, 00:31:51.957 "zcopy": false, 00:31:51.957 "get_zone_info": false, 00:31:51.957 "zone_management": false, 00:31:51.957 "zone_append": false, 00:31:51.957 "compare": false, 00:31:51.957 "compare_and_write": false, 00:31:51.957 "abort": false, 00:31:51.957 "seek_hole": true, 00:31:51.957 "seek_data": true, 00:31:51.957 "copy": false, 00:31:51.957 "nvme_iov_md": false 00:31:51.957 }, 00:31:51.957 "driver_specific": { 00:31:51.957 "lvol": { 00:31:51.957 "lvol_store_uuid": "5510eeff-fb14-4f2e-86b4-07bcf1fe1489", 00:31:51.957 "base_bdev": "aio_bdev", 00:31:51.957 "thin_provision": false, 00:31:51.957 "num_allocated_clusters": 38, 00:31:51.957 "snapshot": false, 00:31:51.957 "clone": false, 00:31:51.957 "esnap_clone": false 00:31:51.957 } 00:31:51.957 } 00:31:51.957 } 00:31:51.957 ] 00:31:51.957 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:51.957 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:51.957 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:52.218 12:06:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:52.218 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:52.218 12:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:52.478 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:52.478 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 877986a4-edfb-4dde-bcf9-b02b42ca2bb8 00:31:52.478 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5510eeff-fb14-4f2e-86b4-07bcf1fe1489 00:31:52.739 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:52.999 00:31:52.999 real 0m15.917s 00:31:52.999 user 0m15.608s 00:31:52.999 sys 0m1.423s 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:52.999 ************************************ 00:31:52.999 END TEST lvs_grow_clean 00:31:52.999 ************************************ 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:52.999 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:53.000 ************************************ 00:31:53.000 START TEST lvs_grow_dirty 00:31:53.000 ************************************ 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.000 12:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:53.260 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:53.260 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:53.521 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 lvol 150 00:31:53.781 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9fce216d-5bc5-4b56-98b3-2384520e4ff6 00:31:53.781 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.781 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:54.042 [2024-12-09 12:07:01.696112] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:54.042 [2024-12-09 12:07:01.696258] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:54.042 true 00:31:54.042 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:31:54.042 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:54.042 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:54.042 12:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:54.302 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fce216d-5bc5-4b56-98b3-2384520e4ff6 00:31:54.563 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.563 [2024-12-09 12:07:02.348621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.563 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=279518 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 279518 /var/tmp/bdevperf.sock 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 279518 ']' 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
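As in the clean variant, RPC traffic is held back until bdevperf's UNIX socket answers; the "Waiting for process to start up..." lines come from a retry loop in autotest_common.sh. A minimal stand-in with the same shape (the in-tree helper is more elaborate; rpc_get_methods here is just a cheap liveness probe):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died while starting
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock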
00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.824 12:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:54.824 [2024-12-09 12:07:02.582818] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:31:54.824 [2024-12-09 12:07:02.582882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279518 ] 00:31:54.824 [2024-12-09 12:07:02.669425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.824 [2024-12-09 12:07:02.706807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.767 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.767 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:55.767 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:56.028 Nvme0n1 00:31:56.028 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:56.028 [ 00:31:56.028 { 00:31:56.028 "name": "Nvme0n1", 00:31:56.028 "aliases": [ 00:31:56.028 "9fce216d-5bc5-4b56-98b3-2384520e4ff6" 00:31:56.028 ], 00:31:56.028 "product_name": "NVMe disk", 00:31:56.028 "block_size": 4096, 00:31:56.028 "num_blocks": 38912, 00:31:56.028 "uuid": "9fce216d-5bc5-4b56-98b3-2384520e4ff6", 00:31:56.028 "numa_id": 0, 00:31:56.028 "assigned_rate_limits": { 00:31:56.028 "rw_ios_per_sec": 0, 00:31:56.028 "rw_mbytes_per_sec": 0, 00:31:56.028 "r_mbytes_per_sec": 0, 00:31:56.028 "w_mbytes_per_sec": 0 00:31:56.028 }, 00:31:56.028 "claimed": false, 00:31:56.028 "zoned": false, 00:31:56.028 "supported_io_types": { 00:31:56.028 "read": true, 00:31:56.028 "write": true, 00:31:56.028 "unmap": true, 00:31:56.028 "flush": true, 00:31:56.028 "reset": true, 00:31:56.028 "nvme_admin": true, 00:31:56.028 "nvme_io": true, 00:31:56.028 "nvme_io_md": false, 00:31:56.028 "write_zeroes": true, 00:31:56.028 "zcopy": false, 00:31:56.028 "get_zone_info": false, 00:31:56.028 "zone_management": false, 00:31:56.028 "zone_append": false, 00:31:56.028 "compare": true, 00:31:56.028 "compare_and_write": true, 00:31:56.028 "abort": true, 00:31:56.028 "seek_hole": false, 00:31:56.028 "seek_data": false, 00:31:56.028 "copy": true, 00:31:56.028 "nvme_iov_md": false 00:31:56.028 }, 00:31:56.028 "memory_domains": [ 00:31:56.028 { 00:31:56.028 "dma_device_id": "system", 00:31:56.028 "dma_device_type": 1 00:31:56.028 } 00:31:56.028 ], 00:31:56.028 "driver_specific": { 00:31:56.028 "nvme": [ 00:31:56.028 { 00:31:56.028 "trid": { 00:31:56.028 "trtype": "TCP", 00:31:56.028 "adrfam": "IPv4", 00:31:56.028 "traddr": "10.0.0.2", 00:31:56.028 "trsvcid": "4420", 00:31:56.028 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:56.028 }, 00:31:56.028 "ctrlr_data": { 
00:31:56.028 "cntlid": 1, 00:31:56.028 "vendor_id": "0x8086", 00:31:56.028 "model_number": "SPDK bdev Controller", 00:31:56.028 "serial_number": "SPDK0", 00:31:56.028 "firmware_revision": "25.01", 00:31:56.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.028 "oacs": { 00:31:56.028 "security": 0, 00:31:56.028 "format": 0, 00:31:56.028 "firmware": 0, 00:31:56.028 "ns_manage": 0 00:31:56.028 }, 00:31:56.028 "multi_ctrlr": true, 00:31:56.028 "ana_reporting": false 00:31:56.028 }, 00:31:56.028 "vs": { 00:31:56.028 "nvme_version": "1.3" 00:31:56.028 }, 00:31:56.028 "ns_data": { 00:31:56.028 "id": 1, 00:31:56.028 "can_share": true 00:31:56.028 } 00:31:56.028 } 00:31:56.029 ], 00:31:56.029 "mp_policy": "active_passive" 00:31:56.029 } 00:31:56.029 } 00:31:56.029 ] 00:31:56.289 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=279832 00:31:56.289 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:56.289 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:56.289 Running I/O for 10 seconds... 00:31:57.239 Latency(us) 00:31:57.239 [2024-12-09T11:07:05.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.239 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:57.239 [2024-12-09T11:07:05.125Z] =================================================================================================================== 00:31:57.239 [2024-12-09T11:07:05.125Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:57.239 00:31:58.180 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:31:58.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.180 Nvme0n1 : 2.00 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:31:58.180 [2024-12-09T11:07:06.066Z] =================================================================================================================== 00:31:58.180 [2024-12-09T11:07:06.066Z] Total : 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:31:58.180 00:31:58.442 true 00:31:58.442 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:31:58.442 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:58.442 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:58.442 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:58.442 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 279832 00:31:59.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.385 Nvme0n1 : 3.00 
17928.33 70.03 0.00 0.00 0.00 0.00 0.00 00:31:59.385 [2024-12-09T11:07:07.271Z] =================================================================================================================== 00:31:59.385 [2024-12-09T11:07:07.271Z] Total : 17928.33 70.03 0.00 0.00 0.00 0.00 0.00 00:31:59.385 00:32:00.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.328 Nvme0n1 : 4.00 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:32:00.328 [2024-12-09T11:07:08.214Z] =================================================================================================================== 00:32:00.328 [2024-12-09T11:07:08.214Z] Total : 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:32:00.328 00:32:01.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.270 Nvme0n1 : 5.00 19532.60 76.30 0.00 0.00 0.00 0.00 0.00 00:32:01.270 [2024-12-09T11:07:09.156Z] =================================================================================================================== 00:32:01.270 [2024-12-09T11:07:09.156Z] Total : 19532.60 76.30 0.00 0.00 0.00 0.00 0.00 00:32:01.270 00:32:02.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.214 Nvme0n1 : 6.00 20552.83 80.28 0.00 0.00 0.00 0.00 0.00 00:32:02.214 [2024-12-09T11:07:10.100Z] =================================================================================================================== 00:32:02.214 [2024-12-09T11:07:10.100Z] Total : 20552.83 80.28 0.00 0.00 0.00 0.00 0.00 00:32:02.214 00:32:03.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.158 Nvme0n1 : 7.00 21263.86 83.06 0.00 0.00 0.00 0.00 0.00 00:32:03.158 [2024-12-09T11:07:11.044Z] =================================================================================================================== 00:32:03.158 [2024-12-09T11:07:11.044Z] Total : 21263.86 83.06 0.00 0.00 0.00 0.00 0.00 00:32:03.158 00:32:04.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.179 Nvme0n1 : 8.00 21812.62 85.21 0.00 0.00 0.00 0.00 0.00 00:32:04.179 [2024-12-09T11:07:12.065Z] =================================================================================================================== 00:32:04.179 [2024-12-09T11:07:12.065Z] Total : 21812.62 85.21 0.00 0.00 0.00 0.00 0.00 00:32:04.179 00:32:05.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.211 Nvme0n1 : 9.00 22239.44 86.87 0.00 0.00 0.00 0.00 0.00 00:32:05.211 [2024-12-09T11:07:13.097Z] =================================================================================================================== 00:32:05.211 [2024-12-09T11:07:13.097Z] Total : 22239.44 86.87 0.00 0.00 0.00 0.00 0.00 00:32:05.211 00:32:06.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.229 Nvme0n1 : 10.00 22581.10 88.21 0.00 0.00 0.00 0.00 0.00 00:32:06.229 [2024-12-09T11:07:14.115Z] =================================================================================================================== 00:32:06.229 [2024-12-09T11:07:14.115Z] Total : 22581.10 88.21 0.00 0.00 0.00 0.00 0.00 00:32:06.229 00:32:06.229 00:32:06.229 Latency(us) 00:32:06.229 [2024-12-09T11:07:14.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.229 Nvme0n1 : 10.00 22579.27 88.20 0.00 0.00 5666.07 3044.69 32112.64 00:32:06.229 
[2024-12-09T11:07:14.115Z] =================================================================================================================== 00:32:06.229 [2024-12-09T11:07:14.115Z] Total : 22579.27 88.20 0.00 0.00 5666.07 3044.69 32112.64 00:32:06.229 { 00:32:06.229 "results": [ 00:32:06.229 { 00:32:06.229 "job": "Nvme0n1", 00:32:06.229 "core_mask": "0x2", 00:32:06.229 "workload": "randwrite", 00:32:06.229 "status": "finished", 00:32:06.229 "queue_depth": 128, 00:32:06.229 "io_size": 4096, 00:32:06.229 "runtime": 10.003688, 00:32:06.229 "iops": 22579.27276420456, 00:32:06.229 "mibps": 88.20028423517407, 00:32:06.229 "io_failed": 0, 00:32:06.229 "io_timeout": 0, 00:32:06.229 "avg_latency_us": 5666.065089636201, 00:32:06.229 "min_latency_us": 3044.693333333333, 00:32:06.229 "max_latency_us": 32112.64 00:32:06.229 } 00:32:06.229 ], 00:32:06.229 "core_count": 1 00:32:06.229 } 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 279518 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 279518 ']' 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 279518 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.229 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279518 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279518' 00:32:06.490 killing process with pid 279518 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 279518 00:32:06.490 Received shutdown signal, test time was about 10.000000 seconds 00:32:06.490 00:32:06.490 Latency(us) 00:32:06.490 [2024-12-09T11:07:14.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.490 [2024-12-09T11:07:14.376Z] =================================================================================================================== 00:32:06.490 [2024-12-09T11:07:14.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 279518 00:32:06.490 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.751 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:32:06.751 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474
00:32:06.751 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:32:07.011 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 276037
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 276037
00:32:07.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 276037 Killed "${NVMF_APP[@]}" "$@"
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=281865
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 281865
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 281865 ']'
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:07.012 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:32:07.012 [2024-12-09 12:07:14.817355] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-12-09 12:07:14.818385] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization...
[2024-12-09 12:07:14.818432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-09 12:07:14.909456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 12:07:14.940703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-09 12:07:14.940736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-09 12:07:14.940741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-09 12:07:14.940746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-09 12:07:14.940751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-09 12:07:14.941202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-09 12:07:14.993303] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
[2024-12-09 12:07:14.993493] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.846 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:08.107 [2024-12-09 12:07:15.835329] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:08.107 [2024-12-09 12:07:15.835562] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:08.107 [2024-12-09 12:07:15.835668] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9fce216d-5bc5-4b56-98b3-2384520e4ff6 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9fce216d-5bc5-4b56-98b3-2384520e4ff6 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:08.107 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:08.368 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9fce216d-5bc5-4b56-98b3-2384520e4ff6 -t 2000 00:32:08.368 [ 00:32:08.368 { 00:32:08.368 "name": "9fce216d-5bc5-4b56-98b3-2384520e4ff6", 00:32:08.368 "aliases": [ 00:32:08.368 "lvs/lvol" 00:32:08.368 ], 00:32:08.368 "product_name": "Logical Volume", 00:32:08.368 "block_size": 4096, 00:32:08.368 "num_blocks": 38912, 00:32:08.368 "uuid": "9fce216d-5bc5-4b56-98b3-2384520e4ff6", 00:32:08.368 "assigned_rate_limits": { 00:32:08.368 "rw_ios_per_sec": 0, 00:32:08.368 "rw_mbytes_per_sec": 0, 00:32:08.368 
"r_mbytes_per_sec": 0, 00:32:08.368 "w_mbytes_per_sec": 0 00:32:08.368 }, 00:32:08.368 "claimed": false, 00:32:08.368 "zoned": false, 00:32:08.368 "supported_io_types": { 00:32:08.368 "read": true, 00:32:08.368 "write": true, 00:32:08.368 "unmap": true, 00:32:08.368 "flush": false, 00:32:08.368 "reset": true, 00:32:08.368 "nvme_admin": false, 00:32:08.368 "nvme_io": false, 00:32:08.368 "nvme_io_md": false, 00:32:08.368 "write_zeroes": true, 00:32:08.368 "zcopy": false, 00:32:08.368 "get_zone_info": false, 00:32:08.368 "zone_management": false, 00:32:08.368 "zone_append": false, 00:32:08.368 "compare": false, 00:32:08.368 "compare_and_write": false, 00:32:08.368 "abort": false, 00:32:08.368 "seek_hole": true, 00:32:08.368 "seek_data": true, 00:32:08.368 "copy": false, 00:32:08.368 "nvme_iov_md": false 00:32:08.368 }, 00:32:08.368 "driver_specific": { 00:32:08.369 "lvol": { 00:32:08.369 "lvol_store_uuid": "0d145a7a-154f-4f0e-9f4e-e2163ea3d474", 00:32:08.369 "base_bdev": "aio_bdev", 00:32:08.369 "thin_provision": false, 00:32:08.369 "num_allocated_clusters": 38, 00:32:08.369 "snapshot": false, 00:32:08.369 "clone": false, 00:32:08.369 "esnap_clone": false 00:32:08.369 } 00:32:08.369 } 00:32:08.369 } 00:32:08.369 ] 00:32:08.369 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:08.369 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:08.369 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:08.630 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:08.630 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:08.630 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:08.892 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:08.892 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:08.893 [2024-12-09 12:07:16.729767] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:08.893 12:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:32:08.893 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474
00:32:09.154 request:
00:32:09.154 {
00:32:09.154 "uuid": "0d145a7a-154f-4f0e-9f4e-e2163ea3d474",
00:32:09.154 "method": "bdev_lvol_get_lvstores",
00:32:09.154 "req_id": 1
00:32:09.154 }
00:32:09.154 Got JSON-RPC error response
00:32:09.154 response:
00:32:09.154 {
00:32:09.154 "code": -19,
00:32:09.154 "message": "No such device"
00:32:09.154 }
00:32:09.154 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:32:09.154 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:09.154 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:09.154 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:09.154 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:32:09.416 aio_bdev
00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9fce216d-5bc5-4b56-98b3-2384520e4ff6
00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9fce216d-5bc5-4b56-98b3-2384520e4ff6
00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:09.416 12:07:17
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:09.416 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:09.677 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9fce216d-5bc5-4b56-98b3-2384520e4ff6 -t 2000 00:32:09.677 [ 00:32:09.677 { 00:32:09.677 "name": "9fce216d-5bc5-4b56-98b3-2384520e4ff6", 00:32:09.677 "aliases": [ 00:32:09.677 "lvs/lvol" 00:32:09.677 ], 00:32:09.677 "product_name": "Logical Volume", 00:32:09.677 "block_size": 4096, 00:32:09.677 "num_blocks": 38912, 00:32:09.677 "uuid": "9fce216d-5bc5-4b56-98b3-2384520e4ff6", 00:32:09.677 "assigned_rate_limits": { 00:32:09.677 "rw_ios_per_sec": 0, 00:32:09.677 "rw_mbytes_per_sec": 0, 00:32:09.677 "r_mbytes_per_sec": 0, 00:32:09.677 "w_mbytes_per_sec": 0 00:32:09.677 }, 00:32:09.677 "claimed": false, 00:32:09.677 "zoned": false, 00:32:09.677 "supported_io_types": { 00:32:09.677 "read": true, 00:32:09.677 "write": true, 00:32:09.677 "unmap": true, 00:32:09.677 "flush": false, 00:32:09.677 "reset": true, 00:32:09.677 "nvme_admin": false, 00:32:09.677 "nvme_io": false, 00:32:09.677 "nvme_io_md": false, 00:32:09.677 "write_zeroes": true, 00:32:09.677 "zcopy": false, 00:32:09.677 "get_zone_info": false, 00:32:09.677 "zone_management": false, 00:32:09.677 "zone_append": false, 00:32:09.677 "compare": false, 00:32:09.678 "compare_and_write": false, 00:32:09.678 "abort": false, 00:32:09.678 "seek_hole": true, 00:32:09.678 "seek_data": true, 00:32:09.678 "copy": false, 00:32:09.678 "nvme_iov_md": false 00:32:09.678 }, 00:32:09.678 "driver_specific": { 00:32:09.678 "lvol": { 00:32:09.678 "lvol_store_uuid": "0d145a7a-154f-4f0e-9f4e-e2163ea3d474", 00:32:09.678 "base_bdev": "aio_bdev", 00:32:09.678 "thin_provision": false, 00:32:09.678 "num_allocated_clusters": 38, 00:32:09.678 "snapshot": false, 00:32:09.678 "clone": false, 00:32:09.678 "esnap_clone": false 00:32:09.678 } 00:32:09.678 } 00:32:09.678 } 00:32:09.678 ] 00:32:09.678 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:09.678 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:09.678 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:09.939 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:09.939 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:09.939 12:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:10.200 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:10.200 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fce216d-5bc5-4b56-98b3-2384520e4ff6 00:32:10.200 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d145a7a-154f-4f0e-9f4e-e2163ea3d474 00:32:10.461 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:10.722 00:32:10.722 real 0m17.620s 00:32:10.722 user 0m35.640s 00:32:10.722 sys 0m3.028s 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:10.722 ************************************ 00:32:10.722 END TEST lvs_grow_dirty 00:32:10.722 ************************************ 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:10.722 nvmf_trace.0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@122 -- # sync 
00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # set +e 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # for i in {1..20} 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:32:10.722 rmmod nvme_tcp 00:32:10.722 rmmod nvme_fabrics 00:32:10.722 rmmod nvme_keyring 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # set -e 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@130 -- # return 0 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 281865 ']' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 281865 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 281865 ']' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 281865 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:10.722 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281865 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281865' 00:32:10.984 killing process with pid 281865 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 281865 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 281865 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # iptr 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # remove_spdk_ns 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.984 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:32:13.535 00:32:13.535 real 0m44.949s 00:32:13.535 user 0m54.257s 00:32:13.535 sys 0m10.568s 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 ************************************ 00:32:13.535 END TEST nvmf_lvs_grow 00:32:13.535 ************************************ 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:13.535 ************************************ 00:32:13.535 START TEST nvmf_bdev_io_wait 00:32:13.535 ************************************ 00:32:13.535 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:13.535 * Looking for test storage... 
00:32:13.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:13.535 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.536 --rc genhtml_branch_coverage=1 00:32:13.536 --rc genhtml_function_coverage=1 00:32:13.536 --rc genhtml_legend=1 00:32:13.536 --rc geninfo_all_blocks=1 00:32:13.536 --rc geninfo_unexecuted_blocks=1 00:32:13.536 00:32:13.536 ' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.536 --rc genhtml_branch_coverage=1 00:32:13.536 --rc genhtml_function_coverage=1 00:32:13.536 --rc genhtml_legend=1 00:32:13.536 --rc geninfo_all_blocks=1 00:32:13.536 --rc geninfo_unexecuted_blocks=1 00:32:13.536 00:32:13.536 ' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.536 --rc genhtml_branch_coverage=1 00:32:13.536 --rc genhtml_function_coverage=1 00:32:13.536 --rc genhtml_legend=1 00:32:13.536 --rc geninfo_all_blocks=1 00:32:13.536 --rc geninfo_unexecuted_blocks=1 00:32:13.536 00:32:13.536 ' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.536 --rc genhtml_branch_coverage=1 00:32:13.536 --rc genhtml_function_coverage=1 00:32:13.536 --rc genhtml_legend=1 00:32:13.536 --rc geninfo_all_blocks=1 00:32:13.536 --rc 
geninfo_unexecuted_blocks=1 00:32:13.536 00:32:13.536 ' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # : 0 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@56 -- # have_pci_nics=0 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # xtrace_disable 00:32:13.536 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_devs=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_devs 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_net_devs=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # pci_drivers=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@318 
-- # local -A pci_drivers 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # net_devs=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga net_devs 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # e810=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga e810 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # x722=() 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga x722 00:32:21.686 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # mlx=() 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # local -ga mlx 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@362 
-- # (( 2 == 0 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:21.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:21.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:21.687 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:21.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:32:21.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:21.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:32:21.687 00:32:21.687 --- 10.0.0.2 ping statistics --- 00:32:21.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.687 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:21.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:21.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:32:21.687 00:32:21.687 --- 10.0.0.1 ping statistics --- 00:32:21.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:21.687 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:21.687 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=286890 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 286890 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 286890 ']' 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
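
The trace above is nvmf_tcp_init from nvmf/common.sh: the target-side port (cvl_0_0) is moved into a fresh network namespace, both ports get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and a ping in each direction proves the link before the target starts. Condensed into a standalone sketch — interface, namespace, and address values are the ones this rig picked and will differ elsewhere:

TARGET_IF=cvl_0_0            # port handed to the target namespace
INIT_IF=cvl_0_1              # port kept in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                        # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
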
00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.688 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 [2024-12-09 12:07:28.594798] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:21.688 [2024-12-09 12:07:28.595949] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:32:21.688 [2024-12-09 12:07:28.596004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.688 [2024-12-09 12:07:28.695786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:21.688 [2024-12-09 12:07:28.750360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.688 [2024-12-09 12:07:28.750414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.688 [2024-12-09 12:07:28.750422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.688 [2024-12-09 12:07:28.750429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.688 [2024-12-09 12:07:28.750435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.688 [2024-12-09 12:07:28.752444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.688 [2024-12-09 12:07:28.752572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:21.688 [2024-12-09 12:07:28.752711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:21.688 [2024-12-09 12:07:28.752737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.688 [2024-12-09 12:07:28.753190] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
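
The target is then started inside that namespace. A minimal sketch of the launch traced above; the binary path is this workspace's, and the socket-polling loop is a crude stand-in for the harness's waitforlisten helper:

NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!    # kept for the killprocess cleanup at the end of the test
# -e 0xFFFF enables all tracepoint groups, --interrupt-mode makes the four
# reactors (-m 0xF) sleep on file descriptors instead of busy-polling, and
# --wait-for-rpc defers subsystem init until framework_start_init arrives.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
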
00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 [2024-12-09 12:07:29.517667] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:21.688 [2024-12-09 12:07:29.517989] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:21.688 [2024-12-09 12:07:29.518674] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:21.688 [2024-12-09 12:07:29.518720] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
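
With the app parked on --wait-for-rpc, the harness shrinks the bdev_io pool before finishing initialization; that starvation is what forces bdevperf onto the spdk_bdev_queue_io_wait retry path this test exercises. The same RPCs, plus the subsystem wiring the trace shows next, replayed with scripts/rpc.py (the harness's rpc_cmd is equivalent; the socket defaults to /var/tmp/spdk.sock):

./scripts/rpc.py bdev_set_options -p 5 -c 1     # pool of 5 bdev_io, per-thread cache of 1
./scripts/rpc.py framework_start_init           # release the --wait-for-rpc hold
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
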
00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.688 [2024-12-09 12:07:29.529754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.688 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.949 Malloc0 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.949 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:21.950 [2024-12-09 12:07:29.601846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=286952 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=286955 00:32:21.950 12:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:21.950 { 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme$subsystem", 00:32:21.950 "trtype": "$TEST_TRANSPORT", 00:32:21.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "$NVMF_PORT", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.950 "hdgst": ${hdgst:-false}, 00:32:21.950 "ddgst": ${ddgst:-false} 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 } 00:32:21.950 EOF 00:32:21.950 )") 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=286957 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:21.950 { 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme$subsystem", 00:32:21.950 "trtype": "$TEST_TRANSPORT", 00:32:21.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "$NVMF_PORT", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.950 "hdgst": ${hdgst:-false}, 00:32:21.950 "ddgst": ${ddgst:-false} 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 } 00:32:21.950 EOF 00:32:21.950 )") 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=286960 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:21.950 { 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme$subsystem", 00:32:21.950 "trtype": "$TEST_TRANSPORT", 00:32:21.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "$NVMF_PORT", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.950 "hdgst": ${hdgst:-false}, 00:32:21.950 "ddgst": ${ddgst:-false} 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 } 00:32:21.950 EOF 00:32:21.950 )") 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:21.950 { 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme$subsystem", 00:32:21.950 "trtype": "$TEST_TRANSPORT", 00:32:21.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "$NVMF_PORT", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.950 "hdgst": ${hdgst:-false}, 00:32:21.950 "ddgst": ${ddgst:-false} 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 } 00:32:21.950 EOF 00:32:21.950 )") 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 286952 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
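
The heredoc/jq dance being traced here is gen_nvmf_target_json from nvmf/common.sh: each bdevperf instance gets a one-controller JSON config on a /dev/fd descriptor rather than attaching via RPC. Roughly, as a sketch — the outer wrapper object is reconstructed from the printf output below and is not the verbatim helper:

gen_cfg() {
  jq . << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}
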
00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme1", 00:32:21.950 "trtype": "tcp", 00:32:21.950 "traddr": "10.0.0.2", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "4420", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.950 "hdgst": false, 00:32:21.950 "ddgst": false 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 }' 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme1", 00:32:21.950 "trtype": "tcp", 00:32:21.950 "traddr": "10.0.0.2", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "4420", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.950 "hdgst": false, 00:32:21.950 "ddgst": false 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 }' 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme1", 00:32:21.950 "trtype": "tcp", 00:32:21.950 "traddr": "10.0.0.2", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "4420", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.950 "hdgst": false, 00:32:21.950 "ddgst": false 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 }' 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:32:21.950 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:21.950 "params": { 00:32:21.950 "name": "Nvme1", 00:32:21.950 "trtype": "tcp", 00:32:21.950 "traddr": "10.0.0.2", 00:32:21.950 "adrfam": "ipv4", 00:32:21.950 "trsvcid": "4420", 00:32:21.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.950 "hdgst": false, 00:32:21.950 "ddgst": false 00:32:21.950 }, 00:32:21.950 "method": "bdev_nvme_attach_controller" 00:32:21.950 }' 00:32:21.951 [2024-12-09 12:07:29.657864] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:32:21.951 [2024-12-09 12:07:29.657933] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:21.951 [2024-12-09 12:07:29.658073] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:32:21.951 [2024-12-09 12:07:29.658136] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:21.951 [2024-12-09 12:07:29.663448] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:32:21.951 [2024-12-09 12:07:29.663505] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:21.951 [2024-12-09 12:07:29.664387] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:32:21.951 [2024-12-09 12:07:29.664445] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:21.951 [2024-12-09 12:07:29.821822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.211 [2024-12-09 12:07:29.851357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:22.211 [2024-12-09 12:07:29.863596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.211 [2024-12-09 12:07:29.892352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:22.211 [2024-12-09 12:07:29.912831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.211 [2024-12-09 12:07:29.941686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:22.211 [2024-12-09 12:07:29.968544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.211 [2024-12-09 12:07:29.997725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:22.211 Running I/O for 1 seconds... 00:32:22.211 Running I/O for 1 seconds... 00:32:22.472 Running I/O for 1 seconds... 00:32:22.472 Running I/O for 1 seconds... 
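
Four bdevperf processes now run concurrently against the same Malloc0-backed namespace, one workload per process, each pinned to its own core so the target's 0xF reactors see mixed traffic at once. A sketch of the fan-out; gen_nvmf_target_json is the nvmf/common.sh helper traced above, and the flags mirror the four command lines in this log:

workloads=(write read flush unmap)
masks=(0x10 0x20 0x40 0x80)
pids=()
for n in 0 1 2 3; do
  ./build/examples/bdevperf -m "${masks[n]}" -i $((n + 1)) \
    --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w "${workloads[n]}" -t 1 -s 256 &   # qd 128, 4 KiB IO, 1 s run, 256 MB mem
  pids+=($!)
done
wait "${pids[@]}"
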
00:32:23.477 174024.00 IOPS, 679.78 MiB/s 00:32:23.477 Latency(us) 00:32:23.477 [2024-12-09T11:07:31.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.477 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:23.477 Nvme1n1 : 1.00 173655.86 678.34 0.00 0.00 732.80 307.20 2088.96 00:32:23.477 [2024-12-09T11:07:31.363Z] =================================================================================================================== 00:32:23.477 [2024-12-09T11:07:31.363Z] Total : 173655.86 678.34 0.00 0.00 732.80 307.20 2088.96 00:32:23.477 9537.00 IOPS, 37.25 MiB/s 00:32:23.477 Latency(us) 00:32:23.477 [2024-12-09T11:07:31.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.477 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:23.477 Nvme1n1 : 1.02 9538.59 37.26 0.00 0.00 13338.83 2430.29 24685.23 00:32:23.477 [2024-12-09T11:07:31.363Z] =================================================================================================================== 00:32:23.477 [2024-12-09T11:07:31.363Z] Total : 9538.59 37.26 0.00 0.00 13338.83 2430.29 24685.23 00:32:23.477 19542.00 IOPS, 76.34 MiB/s 00:32:23.477 Latency(us) 00:32:23.477 [2024-12-09T11:07:31.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.477 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:23.477 Nvme1n1 : 1.01 19601.10 76.57 0.00 0.00 6512.60 2048.00 10704.21 00:32:23.477 [2024-12-09T11:07:31.363Z] =================================================================================================================== 00:32:23.477 [2024-12-09T11:07:31.363Z] Total : 19601.10 76.57 0.00 0.00 6512.60 2048.00 10704.21 00:32:23.477 8773.00 IOPS, 34.27 MiB/s 00:32:23.477 Latency(us) 00:32:23.477 [2024-12-09T11:07:31.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.477 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:23.477 Nvme1n1 : 1.01 8862.95 34.62 0.00 0.00 14403.15 3713.71 30583.47 00:32:23.477 [2024-12-09T11:07:31.363Z] =================================================================================================================== 00:32:23.477 [2024-12-09T11:07:31.363Z] Total : 8862.95 34.62 0.00 0.00 14403.15 3713.71 30583.47 00:32:23.477 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 286955 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 286957 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 286960 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # sync 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # set +e 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # for i in {1..20} 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:32:23.738 rmmod nvme_tcp 00:32:23.738 rmmod nvme_fabrics 00:32:23.738 rmmod nvme_keyring 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # set -e 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@130 -- # return 0 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 286890 ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 286890 ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286890' 00:32:23.738 killing process with pid 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 286890 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # iptr 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:32:23.738 
12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # remove_spdk_ns 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.738 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:32:26.284 00:32:26.284 real 0m12.735s 00:32:26.284 user 0m14.891s 00:32:26.284 sys 0m7.380s 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.284 ************************************ 00:32:26.284 END TEST nvmf_bdev_io_wait 00:32:26.284 ************************************ 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:26.284 ************************************ 00:32:26.284 START TEST nvmf_queue_depth 00:32:26.284 ************************************ 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:26.284 * Looking for test storage... 
00:32:26.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:26.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.284 --rc genhtml_branch_coverage=1 00:32:26.284 --rc genhtml_function_coverage=1 00:32:26.284 --rc genhtml_legend=1 00:32:26.284 --rc geninfo_all_blocks=1 00:32:26.284 --rc geninfo_unexecuted_blocks=1 00:32:26.284 00:32:26.284 ' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:26.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.284 --rc genhtml_branch_coverage=1 00:32:26.284 --rc genhtml_function_coverage=1 00:32:26.284 --rc genhtml_legend=1 00:32:26.284 --rc geninfo_all_blocks=1 00:32:26.284 --rc geninfo_unexecuted_blocks=1 00:32:26.284 00:32:26.284 ' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:26.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.284 --rc genhtml_branch_coverage=1 00:32:26.284 --rc genhtml_function_coverage=1 00:32:26.284 --rc genhtml_legend=1 00:32:26.284 --rc geninfo_all_blocks=1 00:32:26.284 --rc geninfo_unexecuted_blocks=1 00:32:26.284 00:32:26.284 ' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:26.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.284 --rc genhtml_branch_coverage=1 00:32:26.284 --rc genhtml_function_coverage=1 00:32:26.284 --rc genhtml_legend=1 00:32:26.284 --rc geninfo_all_blocks=1 00:32:26.284 --rc 
geninfo_unexecuted_blocks=1 00:32:26.284 00:32:26.284 ' 00:32:26.284 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.285 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # : 0 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@56 -- # have_pci_nics=0 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@310 -- # xtrace_disable 00:32:26.285 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_devs=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_devs 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_net_devs=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@318 -- # pci_drivers=() 
00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@318 -- # local -A pci_drivers 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # net_devs=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga net_devs 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # e810=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga e810 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # x722=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga x722 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@323 -- # mlx=() 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@323 -- # local -ga mlx 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:32:34.428 12:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:34.428 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:34.428 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:34.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:34.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:34.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:32:34.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:32:34.429 00:32:34.429 --- 10.0.0.2 ping statistics --- 00:32:34.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.429 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:32:34.429 00:32:34.429 --- 10.0.0.1 ping statistics --- 00:32:34.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.429 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=291626 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 291626 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 291626 ']' 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
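[Annotation] The nvmf_tcp_init sequence above wires the two E810 ports into a target/initiator pair by isolating one port in a private network namespace, so test traffic really crosses the physical link. Condensed into a plain script, with every command taken verbatim from the logged steps @268–@292:

  # Target port cvl_0_0 moves into its own namespace; initiator port cvl_0_1
  # stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so teardown can strip it with
  # iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step near the end):
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

From step @294 onward NVMF_APP is prefixed with the namespace command, so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while bdevperf connects from the root namespace.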
00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.429 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.429 [2024-12-09 12:07:41.475368] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.429 [2024-12-09 12:07:41.476525] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:32:34.429 [2024-12-09 12:07:41.476579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.429 [2024-12-09 12:07:41.580227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.429 [2024-12-09 12:07:41.630411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.429 [2024-12-09 12:07:41.630463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.429 [2024-12-09 12:07:41.630472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.430 [2024-12-09 12:07:41.630479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.430 [2024-12-09 12:07:41.630485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.430 [2024-12-09 12:07:41.631261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.430 [2024-12-09 12:07:41.708049] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.430 [2024-12-09 12:07:41.708331] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
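[Annotation] waitforlisten above blocks until nvmf_tgt answers on /var/tmp/spdk.sock. The probe it issues is not visible in this excerpt, so the sketch below assumes rpc.py rpc_get_methods as the liveness check; the socket path and the max_retries=100 budget come from the log:

  # Hypothetical waitforlisten-style poll; rpc_get_methods as the probe is an
  # assumption, socket path and retry count are from the log above.
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          break   # target is up and serving RPCs
      fi
      sleep 0.1
  done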
00:32:34.430 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.430 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:34.430 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:34.430 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.430 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.691 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.691 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.691 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.691 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 [2024-12-09 12:07:42.336105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 Malloc0 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 [2024-12-09 12:07:42.420218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=291673 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 291673 /var/tmp/bdevperf.sock 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 291673 ']' 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:34.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.692 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:34.692 [2024-12-09 12:07:42.476534] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
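[Annotation] queue_depth.sh@23–@27 above configure the target purely over JSON-RPC, and @29 launches bdevperf in RPC-wait mode. The same sequence as direct rpc.py calls, with every command and flag taken from the log (only the $rpc shorthand is added):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport, options as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf idles (-z) on its own RPC socket until a bdev is
  # attached, then runs verify I/O at queue depth 1024 in 4 KiB units for 10 s.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The log below then attaches the remote namespace as NVMe0 (bdev_nvme_attach_controller against /var/tmp/bdevperf.sock) and starts the run with bdevperf.py perform_tests.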
00:32:34.692 [2024-12-09 12:07:42.476601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291673 ] 00:32:34.692 [2024-12-09 12:07:42.569588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.953 [2024-12-09 12:07:42.623204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.526 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.527 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:35.527 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:35.527 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.527 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:35.788 NVMe0n1 00:32:35.788 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.788 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.788 Running I/O for 10 seconds... 00:32:38.121 8192.00 IOPS, 32.00 MiB/s [2024-12-09T11:07:46.952Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-09T11:07:47.895Z] 8744.00 IOPS, 34.16 MiB/s [2024-12-09T11:07:48.839Z] 9234.75 IOPS, 36.07 MiB/s [2024-12-09T11:07:49.781Z] 10104.20 IOPS, 39.47 MiB/s [2024-12-09T11:07:50.726Z] 10747.33 IOPS, 41.98 MiB/s [2024-12-09T11:07:51.668Z] 11119.29 IOPS, 43.43 MiB/s [2024-12-09T11:07:52.612Z] 11427.75 IOPS, 44.64 MiB/s [2024-12-09T11:07:53.997Z] 11704.67 IOPS, 45.72 MiB/s [2024-12-09T11:07:53.997Z] 11883.70 IOPS, 46.42 MiB/s 00:32:46.111 Latency(us) 00:32:46.111 [2024-12-09T11:07:53.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.111 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:46.111 Verification LBA range: start 0x0 length 0x4000 00:32:46.111 NVMe0n1 : 10.05 11925.44 46.58 0.00 0.00 85588.67 17476.27 76458.67 00:32:46.111 [2024-12-09T11:07:53.997Z] =================================================================================================================== 00:32:46.111 [2024-12-09T11:07:53.997Z] Total : 11925.44 46.58 0.00 0.00 85588.67 17476.27 76458.67 00:32:46.111 { 00:32:46.111 "results": [ 00:32:46.111 { 00:32:46.111 "job": "NVMe0n1", 00:32:46.111 "core_mask": "0x1", 00:32:46.111 "workload": "verify", 00:32:46.111 "status": "finished", 00:32:46.111 "verify_range": { 00:32:46.111 "start": 0, 00:32:46.111 "length": 16384 00:32:46.111 }, 00:32:46.111 "queue_depth": 1024, 00:32:46.111 "io_size": 4096, 00:32:46.111 "runtime": 10.050864, 00:32:46.111 "iops": 11925.442429625951, 00:32:46.111 "mibps": 46.58375949072637, 00:32:46.111 "io_failed": 0, 00:32:46.111 "io_timeout": 0, 00:32:46.111 "avg_latency_us": 85588.67131849947, 00:32:46.111 "min_latency_us": 17476.266666666666, 00:32:46.111 "max_latency_us": 76458.66666666667 00:32:46.111 } 
00:32:46.111 ], 00:32:46.111 "core_count": 1 00:32:46.111 } 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 291673 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 291673 ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 291673 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291673 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291673' 00:32:46.111 killing process with pid 291673 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 291673 00:32:46.111 Received shutdown signal, test time was about 10.000000 seconds 00:32:46.111 00:32:46.111 Latency(us) 00:32:46.111 [2024-12-09T11:07:53.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.111 [2024-12-09T11:07:53.997Z] =================================================================================================================== 00:32:46.111 [2024-12-09T11:07:53.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 291673 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@122 -- # sync 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # set +e 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # for i in {1..20} 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:32:46.111 rmmod nvme_tcp 00:32:46.111 rmmod nvme_fabrics 00:32:46.111 rmmod nvme_keyring 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # set -e 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@130 -- # return 0 00:32:46.111 
12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 291626 ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 291626 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 291626 ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 291626 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.111 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291626 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291626' 00:32:46.372 killing process with pid 291626 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 291626 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 291626 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # iptr 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # remove_spdk_ns 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.372 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:32:48.923 00:32:48.923 real 0m22.422s 00:32:48.923 user 0m24.545s 00:32:48.923 sys 0m7.549s 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 ************************************ 00:32:48.923 END TEST nvmf_queue_depth 00:32:48.923 ************************************ 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:48.923 ************************************ 00:32:48.923 START TEST nvmf_target_multipath 00:32:48.923 ************************************ 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:48.923 * Looking for test storage... 00:32:48.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:48.923 12:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.923 --rc genhtml_branch_coverage=1 00:32:48.923 --rc genhtml_function_coverage=1 00:32:48.923 --rc genhtml_legend=1 00:32:48.923 --rc geninfo_all_blocks=1 00:32:48.923 --rc geninfo_unexecuted_blocks=1 00:32:48.923 00:32:48.923 ' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.923 --rc genhtml_branch_coverage=1 00:32:48.923 --rc genhtml_function_coverage=1 00:32:48.923 --rc genhtml_legend=1 00:32:48.923 --rc geninfo_all_blocks=1 00:32:48.923 --rc geninfo_unexecuted_blocks=1 00:32:48.923 00:32:48.923 ' 00:32:48.923 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.923 --rc genhtml_branch_coverage=1 00:32:48.923 --rc genhtml_function_coverage=1 00:32:48.923 --rc genhtml_legend=1 00:32:48.923 --rc geninfo_all_blocks=1 00:32:48.923 --rc 
geninfo_unexecuted_blocks=1 00:32:48.923 00:32:48.924 ' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.924 --rc genhtml_branch_coverage=1 00:32:48.924 --rc genhtml_function_coverage=1 00:32:48.924 --rc genhtml_legend=1 00:32:48.924 --rc geninfo_all_blocks=1 00:32:48.924 --rc geninfo_unexecuted_blocks=1 00:32:48.924 00:32:48.924 ' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.924 12:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # : 0 00:32:48.924 12:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@56 -- # have_pci_nics=0 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@310 -- # 
xtrace_disable 00:32:48.924 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_devs=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_devs 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_net_devs=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@318 -- # pci_drivers=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@318 -- # local -A pci_drivers 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # net_devs=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga net_devs 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # e810=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga e810 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # x722=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga x722 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@323 -- # mlx=() 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@323 -- # local -ga mlx 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.072 12:08:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:57.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:57.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:32:57.072 
12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:57.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:57.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.072 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:32:57.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:57.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:32:57.073 00:32:57.073 --- 10.0.0.2 ping statistics --- 00:32:57.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.073 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:32:57.073 00:32:57.073 --- 10.0.0.1 ping statistics --- 00:32:57.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.073 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:57.073 12:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:57.073 only one NIC for nvmf test 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@122 -- # sync 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # set +e 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # for i in {1..20} 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:32:57.073 rmmod nvme_tcp 00:32:57.073 rmmod nvme_fabrics 00:32:57.073 rmmod nvme_keyring 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # set -e 00:32:57.073 12:08:04 
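[Editor's note] The block above is the heart of nvmf_tcp_init: the first E810 port (cvl_0_0) becomes the target and is moved into its own network namespace, the second (cvl_0_1) stays in the host namespace as the initiator, a tagged iptables rule opens the NVMe/TCP port, and one ping in each direction proves the 10.0.0.0/24 link. The same setup pulled out of the trace into plain commands (interface and address names are the ones logged):

    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open port 4420, tagged with an SPDK_NVMF comment so cleanup can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator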
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@130 -- # return 0 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # iptr 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # remove_spdk_ns 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.073 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@122 -- # sync 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # set +e 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # for i in {1..20} 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # set -e 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@130 -- # return 0 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:32:58.459 12:08:06 
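[Editor's note] multipath.sh bails out here by design: with both ports on one NIC there is no second fabric path to exercise, so it prints 'only one NIC for nvmf test' and exits 0, and the EXIT trap then runs nvmftestfini a second time (harmlessly, since each step is idempotent). A sketch of that teardown as traced; _remove_spdk_ns runs behind xtrace_disable_per_cmd, so the netns deletion shown is an assumption:

    sync
    set +e                                   # module unload may need retries
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # iptr: keep every firewall rule except the SPDK_NVMF-tagged ones from setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                       # drop the initiator-side address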
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # iptr 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # remove_spdk_ns 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:32:58.459 00:32:58.459 real 0m9.968s 00:32:58.459 user 0m2.172s 00:32:58.459 sys 0m5.745s 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:58.459 ************************************ 00:32:58.459 END TEST nvmf_target_multipath 00:32:58.459 ************************************ 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:58.459 ************************************ 00:32:58.459 START TEST nvmf_zcopy 00:32:58.459 ************************************ 00:32:58.459 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:58.721 * Looking for test storage... 
00:32:58.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.721 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.722 --rc genhtml_branch_coverage=1 00:32:58.722 --rc genhtml_function_coverage=1 00:32:58.722 --rc genhtml_legend=1 00:32:58.722 --rc geninfo_all_blocks=1 00:32:58.722 --rc geninfo_unexecuted_blocks=1 00:32:58.722 00:32:58.722 ' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.722 --rc genhtml_branch_coverage=1 00:32:58.722 --rc genhtml_function_coverage=1 00:32:58.722 --rc genhtml_legend=1 00:32:58.722 --rc geninfo_all_blocks=1 00:32:58.722 --rc geninfo_unexecuted_blocks=1 00:32:58.722 00:32:58.722 ' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.722 --rc genhtml_branch_coverage=1 00:32:58.722 --rc genhtml_function_coverage=1 00:32:58.722 --rc genhtml_legend=1 00:32:58.722 --rc geninfo_all_blocks=1 00:32:58.722 --rc geninfo_unexecuted_blocks=1 00:32:58.722 00:32:58.722 ' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.722 --rc genhtml_branch_coverage=1 00:32:58.722 --rc genhtml_function_coverage=1 00:32:58.722 --rc genhtml_legend=1 00:32:58.722 --rc geninfo_all_blocks=1 00:32:58.722 --rc geninfo_unexecuted_blocks=1 00:32:58.722 00:32:58.722 ' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
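[Editor's note] The scripts/common.sh trace above is a pure-bash version compare: 'lt 1.15 2' splits both versions on '.' and '-' and walks the fields numerically, so lcov 1.15 sorts below 2 and the old-lcov coverage options are selected. A behavior-equivalent sketch (the real script additionally validates each field with decimal(), visible at @353-@355):

    cmp_versions() {                          # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]                      # every field matched
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov"   # true, matching the trace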
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # : 0 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.722 12:08:06 
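[Editor's note] The three PATH walls above are paths/export.sh doing an unconditional prepend on every source: the golangci/protoc/go toolchain triple is already present, so each re-export stacks another copy. Functionally harmless, since lookup stops at the first hit, but it bloats every child environment. A hypothetical one-liner (not part of the repo) that would collapse the duplicates while keeping first-seen order:

    # awk splits PATH on ':' and keeps only the first occurrence of each entry
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')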
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@56 -- # have_pci_nics=0 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@310 -- # xtrace_disable 00:32:58.722 12:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_devs=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_devs 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_net_devs=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@318 -- # pci_drivers=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@318 -- # local -A pci_drivers 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # net_devs=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga net_devs 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # e810=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga e810 00:33:06.869 12:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # x722=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga x722 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@323 -- # mlx=() 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@323 -- # local -ga mlx 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:06.869 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # 
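[Editor's note] For readers skimming the @321-@345 lines: that run of array appends rebuilds the supported-NIC tables from the pci_bus_cache, exactly as in the multipath test earlier. The same data at a glance (the associative-array form is an assumed restructuring; the IDs are the ones visible in the trace):

    declare -A nic_family=(
        [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810; 0x159b matches both ports here
        [0x8086:0x37d2]=x722                        # Intel X722
        [0x15b3:0xa2dc]=mlx  [0x15b3:0x1021]=mlx [0x15b3:0xa2d6]=mlx
        [0x15b3:0x101d]=mlx  [0x15b3:0x101b]=mlx [0x15b3:0x1017]=mlx
        [0x15b3:0x1019]=mlx  [0x15b3:0x1015]=mlx [0x15b3:0x1013]=mlx   # Mellanox ConnectX parts
    )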
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:06.869 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.869 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:06.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:06.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:33:06.870 12:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:33:06.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:33:06.870 00:33:06.870 --- 10.0.0.2 ping statistics --- 00:33:06.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.870 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:33:06.870 00:33:06.870 --- 10.0.0.1 ping statistics --- 00:33:06.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.870 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:06.870 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=302151 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 302151 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 302151 ']' 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.870 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:06.870 [2024-12-09 12:08:14.093385] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:06.870 [2024-12-09 12:08:14.094527] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:33:06.870 [2024-12-09 12:08:14.094581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.870 [2024-12-09 12:08:14.192610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.870 [2024-12-09 12:08:14.243072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:06.870 [2024-12-09 12:08:14.243125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.870 [2024-12-09 12:08:14.243134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.870 [2024-12-09 12:08:14.243141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.870 [2024-12-09 12:08:14.243153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:06.870 [2024-12-09 12:08:14.243862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.870 [2024-12-09 12:08:14.321405] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:06.870 [2024-12-09 12:08:14.321694] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
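[Editor's note] This is nvmfappstart: the target binary is launched inside the target namespace with a one-core mask and --interrupt-mode (the point of this test variant), and waitforlisten blocks until the RPC socket answers. Re-sketched below; the polling loop is assumed, since the trace only shows the 'Waiting for process...' message:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # waitforlisten equivalent (assumed): poll the UNIX-domain RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target already died
        sleep 0.5
    done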
00:33:07.130 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.130 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:07.130 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 [2024-12-09 12:08:14.960730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 [2024-12-09 12:08:14.980879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:07.131 12:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.131 malloc0 00:33:07.131 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.131 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:07.131 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.131 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:07.391 { 00:33:07.391 "params": { 00:33:07.391 "name": "Nvme$subsystem", 00:33:07.391 "trtype": "$TEST_TRANSPORT", 00:33:07.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.391 "adrfam": "ipv4", 00:33:07.391 "trsvcid": "$NVMF_PORT", 00:33:07.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.391 "hdgst": ${hdgst:-false}, 00:33:07.391 "ddgst": ${ddgst:-false} 00:33:07.391 }, 00:33:07.391 "method": "bdev_nvme_attach_controller" 00:33:07.391 } 00:33:07.391 EOF 00:33:07.391 )") 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:33:07.391 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:07.391 "params": { 00:33:07.391 "name": "Nvme1", 00:33:07.391 "trtype": "tcp", 00:33:07.391 "traddr": "10.0.0.2", 00:33:07.391 "adrfam": "ipv4", 00:33:07.391 "trsvcid": "4420", 00:33:07.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.391 "hdgst": false, 00:33:07.391 "ddgst": false 00:33:07.391 }, 00:33:07.391 "method": "bdev_nvme_attach_controller" 00:33:07.391 }' 00:33:07.391 [2024-12-09 12:08:15.067303] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
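[Editor's note] The rpc_cmd calls above provision the whole target in five steps: a zero-copy-enabled TCP transport, subsystem cnode1 (serial SPDK00000000000001, at most 10 namespaces), data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev with 4 KiB blocks, and that bdev attached as namespace 1. The same sequence replayed directly against the RPC socket; rpc_cmd is a test wrapper, so using plain rpc.py here is an assumption about its effect, not its implementation:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1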
00:33:07.391 [2024-12-09 12:08:15.067364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302330 ]
00:33:07.391 [2024-12-09 12:08:15.161197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:07.391 [2024-12-09 12:08:15.213552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:07.651 Running I/O for 10 seconds...
00:33:09.536 6399.00 IOPS, 49.99 MiB/s
[2024-12-09T11:08:18.808Z] 6459.50 IOPS, 50.46 MiB/s
[2024-12-09T11:08:19.751Z] 6485.33 IOPS, 50.67 MiB/s
[2024-12-09T11:08:20.693Z] 6502.25 IOPS, 50.80 MiB/s
[2024-12-09T11:08:21.634Z] 7001.80 IOPS, 54.70 MiB/s
[2024-12-09T11:08:22.676Z] 7453.50 IOPS, 58.23 MiB/s
[2024-12-09T11:08:23.421Z] 7778.29 IOPS, 60.77 MiB/s
[2024-12-09T11:08:24.805Z] 8017.38 IOPS, 62.64 MiB/s
[2024-12-09T11:08:25.746Z] 8202.11 IOPS, 64.08 MiB/s
[2024-12-09T11:08:25.746Z] 8350.80 IOPS, 65.24 MiB/s
00:33:17.860 Latency(us)
00:33:17.860 [2024-12-09T11:08:25.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:17.860 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:17.860 Verification LBA range: start 0x0 length 0x1000
00:33:17.860 Nvme1n1 : 10.01 8353.78 65.26 0.00 0.00 15273.33 2157.23 27306.67
00:33:17.860 [2024-12-09T11:08:25.746Z] ===================================================================================================================
00:33:17.860 [2024-12-09T11:08:25.746Z] Total : 8353.78 65.26 0.00 0.00 15273.33 2157.23 27306.67
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=304337
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:33:17.860 {
00:33:17.860 "params": {
00:33:17.860 "name": "Nvme$subsystem",
00:33:17.860 "trtype": "$TEST_TRANSPORT",
00:33:17.860 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:17.860 "adrfam": "ipv4",
00:33:17.860 "trsvcid": "$NVMF_PORT",
00:33:17.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:17.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:17.860 "hdgst": ${hdgst:-false},
00:33:17.860 "ddgst": ${ddgst:-false}
00:33:17.860 },
00:33:17.860 "method": "bdev_nvme_attach_controller"
00:33:17.860 }
00:33:17.860 EOF
00:33:17.860 )")
00:33:17.860 [2024-12-09 12:08:25.516269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already
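[Editor's note] gen_nvmf_target_json, traced twice above, is how bdevperf learns about the target without a config file on disk: each heredoc expands into a bdev_nvme_attach_controller object (the resolved form is visible in the printf output), jq validates the result, and the generated JSON reaches bdevperf as /dev/fd/62 or /dev/fd/63. Those fd paths point to bash process substitution, so the invocation shape is presumably:

    # First run: 10 s verify workload, queue depth 128, 8 KiB I/O
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # Second run (backgrounded as perfpid): 5 s 50/50 randrw at the same depth
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &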
in use 00:33:17.860 [2024-12-09 12:08:25.516297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:33:17.860 [2024-12-09 12:08:25.524242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.524250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:33:17.860 12:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:17.860 "params": { 00:33:17.860 "name": "Nvme1", 00:33:17.860 "trtype": "tcp", 00:33:17.860 "traddr": "10.0.0.2", 00:33:17.860 "adrfam": "ipv4", 00:33:17.860 "trsvcid": "4420", 00:33:17.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.860 "hdgst": false, 00:33:17.860 "ddgst": false 00:33:17.860 }, 00:33:17.860 "method": "bdev_nvme_attach_controller" 00:33:17.860 }' 00:33:17.860 [2024-12-09 12:08:25.532241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.532249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.540240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.540248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.548240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.548247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.560240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.560247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.561939] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
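The verify run above is internally consistent: 8353.78 IOPS of 8192-byte I/O works out to 8353.78 * 8192 / 2^20 = 65.26 MiB/s, matching the MiB/s column, and Little's law recovers the configured queue depth from the latency column (8353.78 IOPS * 15273.33 us of average latency is about 127.6 requests in flight, i.e. the configured depth of 128). Fail/s and TO/s are both 0.00, so the 10-second verify pass completed with no failed or timed-out I/O.

The xtrace above shows zcopy.sh launching a second bdevperf (perfpid=304337) with its JSON config delivered over /dev/fd/63, i.e. through a process substitution around gen_nvmf_target_json; the startup lines around this point belong to that second instance (spdk_pid304337). Below is a minimal standalone sketch of that invocation, assuming a reachable NVMe-oF TCP target at 10.0.0.2:4420. The inner "params" object is copied from the printf output above; the outer "subsystems" envelope and the process-substitution wiring are inferred, since gen_nvmf_target_json's wrapper is not echoed verbatim in this log.

#!/usr/bin/env bash
# Sketch: recreate the logged bdevperf invocation. The "params" block is
# taken from this log; the "subsystems" envelope is SPDK's standard
# JSON-config wrapper and is an assumption here.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# 5 s of 50/50 random read/write at queue depth 128 with 8192-byte I/O,
# the config arriving over an anonymous /dev/fd path as logged above.
"$BDEVPERF" --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192

Passing the config through a file descriptor avoids temporary files and lets the same heredoc template in nvmf/common.sh serve however many subsystems a test attaches.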
00:33:17.860 [2024-12-09 12:08:25.561985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304337 ] 00:33:17.860 [2024-12-09 12:08:25.568240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.568247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.576240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.576247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.584240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.584246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.592240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.592247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.600241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.600248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.608240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.608247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.616239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.616246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.624240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.624246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.632240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.632246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.860 [2024-12-09 12:08:25.640240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.860 [2024-12-09 12:08:25.640246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.643849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.861 [2024-12-09 12:08:25.648241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.648248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.656241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.656248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.664240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.664247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:17.861 [2024-12-09 12:08:25.672241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.672250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.673116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.861 [2024-12-09 12:08:25.680240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.680247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.688244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.688256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.696243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.696254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.704242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.704253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.712240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.712248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.720241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.720250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.728240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.728247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.736240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.736246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:17.861 [2024-12-09 12:08:25.744247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:17.861 [2024-12-09 12:08:25.744262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.752241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.752250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.760241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.760249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.768242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.768251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.776242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.776249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 
12:08:25.784240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.784247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.792240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.792247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.800240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.800247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.808241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.808248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.816240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.816248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.824240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.824248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.832240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.832246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.840240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.840246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.848240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.848247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.856240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.856246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.864241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.864249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.872240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.872249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.880240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.880247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.888240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.888246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.896240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.896246] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.904239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.904246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.912241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.912248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.121 [2024-12-09 12:08:25.920240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.121 [2024-12-09 12:08:25.920246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.928239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.928246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.936240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.936246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.944240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.944246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.952240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.952247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.960353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.960367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.968243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.968252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 Running I/O for 5 seconds... 
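From this point until the run ends, the log is dominated by pairs of target-side errors arriving roughly every 8 ms: spdk_nvmf_subsystem_add_ns_ext rejects each request because NSID 1 is already attached to the subsystem, and the paused-subsystem RPC callback (nvmf_rpc_ns_paused) then reports that the namespace could not be added. That cadence, overlapping the 5-second randrw run started above, is consistent with the zcopy test deliberately re-issuing the add-namespace RPC while bdevperf keeps I/O in flight, so that every attempt exercises the subsystem pause/resume path under load; the expected result is exactly this uniform stream of rejections with I/O continuing. A sketch of a driver loop that would produce this pattern follows; nvmf_subsystem_add_ns is the real rpc.py subcommand, but the bdev name (Malloc0) and the iteration count are illustrative assumptions.

#!/usr/bin/env bash
# Sketch: repeatedly try to attach a namespace as NSID 1 while bdevperf
# runs against the same subsystem. Every call should fail with
# "Requested NSID 1 already in use", matching the pairs in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

for _ in $(seq 1 50); do
    # Malloc0 and the iteration count are assumptions for illustration;
    # the NQN and NSID come from this log. "|| true" keeps the loop
    # going past the expected rejection.
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0 -n 1 || true
done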
00:33:18.122 [2024-12-09 12:08:25.981300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.981316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:25.991716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:25.991732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.122 [2024-12-09 12:08:26.004579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.122 [2024-12-09 12:08:26.004594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.016992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.017009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.029424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.029439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.040380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.040395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.046465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.046483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.059074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.059088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.071756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.071770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.084383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.084398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.090841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.090855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.103837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.103851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.117146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.117160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.128239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.128253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.134245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 
[2024-12-09 12:08:26.134259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.142996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.143010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.155693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.155707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.168894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.168908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.180790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.180804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.192984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.192998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.205293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.205308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.217673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.217688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.382 [2024-12-09 12:08:26.228385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.382 [2024-12-09 12:08:26.228400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.383 [2024-12-09 12:08:26.234428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.383 [2024-12-09 12:08:26.234443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.383 [2024-12-09 12:08:26.247909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.383 [2024-12-09 12:08:26.247924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.383 [2024-12-09 12:08:26.260958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.383 [2024-12-09 12:08:26.260980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.273501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.273516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.284468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.284483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.290476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.290491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.303709] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.303724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.316837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.316851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.643 [2024-12-09 12:08:26.329370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.643 [2024-12-09 12:08:26.329385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.341626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.341645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.353240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.353256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.365687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.365702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.376077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.376092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.389219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.389233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.401224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.401239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.412495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.412509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.425656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.425670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.436511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.436525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.449311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.449326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.461022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.461037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.473649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.473664] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.485274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.485289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.496959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.496974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.508607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.508621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.644 [2024-12-09 12:08:26.521370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.644 [2024-12-09 12:08:26.521384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.531823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.531839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.544887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.544901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.557709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.557724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.568084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.568099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.581256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.581270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.592276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.592291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.598448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.598462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.607303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.607317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.620404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.620419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.626929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.626944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.639873] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.639888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.652342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.652357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.658688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.658703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.667624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.667642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.680750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.680764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.693285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.693299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.704462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.704477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.710537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.710551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.723597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.723612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.736719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.736734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.749208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.749222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.761661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.761676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.771256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.771270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:18.905 [2024-12-09 12:08:26.784524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:18.905 [2024-12-09 12:08:26.784539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.167 [2024-12-09 12:08:26.797044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.167 [2024-12-09 12:08:26.797059] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.167 [2024-12-09 12:08:26.808863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.167 [2024-12-09 12:08:26.808877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.167 [2024-12-09 12:08:26.820848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.167 [2024-12-09 12:08:26.820862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.167 [2024-12-09 12:08:26.833668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.167 [2024-12-09 12:08:26.833683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.167 [2024-12-09 12:08:26.844066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.167 [2024-12-09 12:08:26.844080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.856908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.856922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.869181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.869195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.881721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.881736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.891878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.891893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.904935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.904949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.916967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.916982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.929274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.929288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.941300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.941315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.953686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.953701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.965267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.965282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 19026.00 IOPS, 148.64 MiB/s [2024-12-09T11:08:27.054Z] [2024-12-09 
12:08:26.977480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.977495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.988382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.988397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:26.994466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:26.994480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:27.003285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:27.003300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:27.016018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:27.016033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:27.029298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:27.029312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.168 [2024-12-09 12:08:27.041528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.168 [2024-12-09 12:08:27.041542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.052507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.052521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.065718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.065732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.076909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.076923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.089634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.089652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.100376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.100391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.106378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.106392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.115228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.115246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.128182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.128197] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.141576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.141590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.152280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.152295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.158201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.158215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.167109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.167123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.179725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.179739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.192245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.192260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.198426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.198440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.207161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.207175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.220066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.220081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.232939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.232953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.245420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.245434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.256173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.256188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.429 [2024-12-09 12:08:27.269016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.429 [2024-12-09 12:08:27.269031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.430 [2024-12-09 12:08:27.281277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.430 [2024-12-09 12:08:27.281292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.430 [2024-12-09 12:08:27.293378] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.430 [2024-12-09 12:08:27.293393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.430 [2024-12-09 12:08:27.304502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.430 [2024-12-09 12:08:27.304517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.430 [2024-12-09 12:08:27.310486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.430 [2024-12-09 12:08:27.310501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.323865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.323884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.337043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.337057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.348225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.348239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.360955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.360971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.372975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.372990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.385273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.385288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.397533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.397547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.408411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.408425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.414336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.414350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.426952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.426966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.439759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.439773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.452544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.452558] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.465233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.465247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.477186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.477200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.489411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.489426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.500474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.500488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.506543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.506556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.519347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.519361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.532549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.532563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.545177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.691 [2024-12-09 12:08:27.545195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.691 [2024-12-09 12:08:27.556985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.692 [2024-12-09 12:08:27.556999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.692 [2024-12-09 12:08:27.568810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.692 [2024-12-09 12:08:27.568824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.952 [2024-12-09 12:08:27.583017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.952 [2024-12-09 12:08:27.583032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.952 [2024-12-09 12:08:27.595614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.952 [2024-12-09 12:08:27.595629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.952 [2024-12-09 12:08:27.608671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.952 [2024-12-09 12:08:27.608685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.952 [2024-12-09 12:08:27.620731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:19.952 [2024-12-09 12:08:27.620745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:19.952 [2024-12-09 12:08:27.633287] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:19.952 [2024-12-09 12:08:27.633301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:19.952 [2024-12-09 12:08:27.644397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:19.953 [2024-12-09 12:08:27.644411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every 5-15 ms from 12:08:27.633 through 12:08:30.976, interleaved only with the periodic bdevperf progress markers below ...]
00:33:20.214 19097.50 IOPS, 149.20 MiB/s [2024-12-09T11:08:28.100Z]
00:33:21.261 19108.33 IOPS, 149.28 MiB/s [2024-12-09T11:08:29.147Z]
00:33:22.309 19137.25 IOPS, 149.51 MiB/s [2024-12-09T11:08:30.195Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.840257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.840271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.852846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.852860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.865691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.865705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.876644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.876657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.889304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.889318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.094 [2024-12-09 12:08:30.900180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.094 [2024-12-09 12:08:30.900195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.913148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.913161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.925357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.925372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.937353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.937367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.949507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.949521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.963611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.963625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.095 [2024-12-09 12:08:30.976630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.095 [2024-12-09 12:08:30.976649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 19125.20 IOPS, 149.42 MiB/s 00:33:23.356 Latency(us) 00:33:23.356 [2024-12-09T11:08:31.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.356 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:23.356 Nvme1n1 : 5.01 19129.87 149.45 0.00 0.00 6685.46 2512.21 12069.55 00:33:23.356 [2024-12-09T11:08:31.242Z] =================================================================================================================== 00:33:23.356 [2024-12-09T11:08:31.242Z] Total : 
19129.87 149.45 0.00 0.00 6685.46 2512.21 12069.55 00:33:23.356 [2024-12-09 12:08:30.992249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:30.992263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.000243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.000254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.008245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.008258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.016247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.016264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.024245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.024255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.032244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.032254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.040244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.040253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.048241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.048250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.056241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.056250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.064242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.064249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.072243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.072251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.080242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.080251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 [2024-12-09 12:08:31.088240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.356 [2024-12-09 12:08:31.088248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (304337) - No such process 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 304337 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- 
# rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.356 delay0 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.356 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:23.617 [2024-12-09 12:08:31.251239] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:30.206 Initializing NVMe Controllers 00:33:30.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:30.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:30.206 Initialization complete. Launching workers. 
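The delay bdev is what gives the abort run something to cancel: delay0 wraps malloc0 with roughly one second of injected latency per I/O (the four bdev_delay_create arguments are latencies in microseconds), so at queue depth 64 most submissions are still outstanding when an abort arrives, which the abort statistics below confirm. A minimal sketch of the same sequence driven through rpc.py rather than the test's rpc_cmd wrapper (running from the spdk checkout root, possibly as root, is an assumption here):

  rpc=./scripts/rpc.py
  # free NSID 1, then rebuild it on a deliberately slow bdev
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # hammer the slow namespace with aborts for 5 seconds, as in the trace above
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'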
00:33:30.206 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1295 00:33:30.206 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1582, failed to submit 33 00:33:30.206 success 1432, unsuccessful 150, failed 0 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@122 -- # sync 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # set +e 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # for i in {1..20} 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:33:30.206 rmmod nvme_tcp 00:33:30.206 rmmod nvme_fabrics 00:33:30.206 rmmod nvme_keyring 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # set -e 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@130 -- # return 0 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 302151 ']' 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 302151 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 302151 ']' 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 302151 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302151 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:30.206 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302151' 00:33:30.206 killing process with pid 302151 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 302151 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 302151 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:30.207 12:08:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # iptr 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # remove_spdk_ns 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.207 12:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.121 12:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:33:32.121 00:33:32.121 real 0m33.661s 00:33:32.121 user 0m42.810s 00:33:32.121 sys 0m12.563s 00:33:32.121 12:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.121 12:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:32.121 ************************************ 00:33:32.121 END TEST nvmf_zcopy 00:33:32.121 ************************************ 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:32.383 ************************************ 00:33:32.383 START TEST nvmf_nmic 00:33:32.383 ************************************ 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:32.383 * Looking for test storage... 
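nmic here refers to the Identify Namespace NMIC field (Namespace Multi-path I/O and Namespace Sharing Capabilities): the test first checks that one bdev cannot be claimed by two subsystems, then that a host can reach the same subsystem through two listeners. To reproduce this stage outside the CI harness, the same invocation should work standalone (workspace path as above):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode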
00:33:32.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:32.383 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.384 --rc genhtml_branch_coverage=1 00:33:32.384 --rc genhtml_function_coverage=1 00:33:32.384 --rc genhtml_legend=1 00:33:32.384 --rc geninfo_all_blocks=1 00:33:32.384 --rc geninfo_unexecuted_blocks=1 00:33:32.384 00:33:32.384 ' 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.384 --rc genhtml_branch_coverage=1 00:33:32.384 --rc genhtml_function_coverage=1 00:33:32.384 --rc genhtml_legend=1 00:33:32.384 --rc geninfo_all_blocks=1 00:33:32.384 --rc geninfo_unexecuted_blocks=1 00:33:32.384 00:33:32.384 ' 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.384 --rc genhtml_branch_coverage=1 00:33:32.384 --rc genhtml_function_coverage=1 00:33:32.384 --rc genhtml_legend=1 00:33:32.384 --rc geninfo_all_blocks=1 00:33:32.384 --rc geninfo_unexecuted_blocks=1 00:33:32.384 00:33:32.384 ' 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:32.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.384 --rc genhtml_branch_coverage=1 00:33:32.384 --rc genhtml_function_coverage=1 00:33:32.384 --rc genhtml_legend=1 00:33:32.384 --rc geninfo_all_blocks=1 00:33:32.384 --rc geninfo_unexecuted_blocks=1 00:33:32.384 00:33:32.384 ' 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.384 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # : 0 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.646 12:08:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@56 -- # have_pci_nics=0 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@310 -- # xtrace_disable 00:33:32.646 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_devs=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_devs 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_net_devs=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@318 -- # pci_drivers=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@318 -- # local -A pci_drivers 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # net_devs=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga net_devs 00:33:39.244 12:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # e810=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga e810 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # x722=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga x722 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@323 -- # mlx=() 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@323 -- # local -ga mlx 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:39.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:39.244 12:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:39.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:39.244 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.244 
12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:39.244 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:39.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.245 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
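The two E810 ports are split across network namespaces so a single machine can play both roles: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the surrounding trace (interface and namespace names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into its own ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up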
00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:33:39.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:33:39.505 00:33:39.505 --- 10.0.0.2 ping statistics --- 00:33:39.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.505 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:33:39.505 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:33:39.767 00:33:39.767 --- 10.0.0.1 ping statistics --- 00:33:39.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.767 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=310673 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # waitforlisten 310673 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 310673 ']' 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.767 12:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:39.767 [2024-12-09 12:08:47.520819] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:39.767 [2024-12-09 12:08:47.522044] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:33:39.767 [2024-12-09 12:08:47.522095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.767 [2024-12-09 12:08:47.624644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.028 [2024-12-09 12:08:47.678692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.028 [2024-12-09 12:08:47.678751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.028 [2024-12-09 12:08:47.678760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.028 [2024-12-09 12:08:47.678767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.028 [2024-12-09 12:08:47.678774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.028 [2024-12-09 12:08:47.680703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.028 [2024-12-09 12:08:47.680927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.028 [2024-12-09 12:08:47.681248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.028 [2024-12-09 12:08:47.681249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.028 [2024-12-09 12:08:47.759873] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:40.028 [2024-12-09 12:08:47.759886] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.028 [2024-12-09 12:08:47.761009] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
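Interrupt mode swaps SPDK's default busy-polling reactors for event-driven wakeups; the thread.c notices above show each poll-group thread being switched over as the four reactors come up on cores 0-3 (mask 0xF). Stripped of harness plumbing, the target launch in this trace reduces to (path relative to the spdk checkout):

  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --interrupt-mode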
00:33:40.028 [2024-12-09 12:08:47.761052] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:40.028 [2024-12-09 12:08:47.761200] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 [2024-12-09 12:08:48.394300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 Malloc0 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
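Target-side provisioning for the nmic test comes down to five RPCs: create the TCP transport, back it with a 64 MiB malloc bdev of 512-byte blocks, wrap that in a subsystem, attach the namespace, and listen on the first port. The equivalent rpc.py sequence (a sketch; the trace drives the same calls through its rpc_cmd wrapper):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420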
00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.602 [2024-12-09 12:08:48.482450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.602 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:40.865 test case1: single bdev can't be used in multiple subsystems 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.865 [2024-12-09 12:08:48.517910] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:40.865 [2024-12-09 12:08:48.517934] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:40.865 [2024-12-09 12:08:48.517942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.865 request: 00:33:40.865 { 00:33:40.865 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.865 "namespace": { 00:33:40.865 "bdev_name": "Malloc0", 00:33:40.865 "no_auto_visible": false, 00:33:40.865 "hide_metadata": false 00:33:40.865 }, 00:33:40.865 "method": "nvmf_subsystem_add_ns", 00:33:40.865 "req_id": 1 00:33:40.865 } 00:33:40.865 Got JSON-RPC error response 00:33:40.865 response: 00:33:40.865 { 00:33:40.865 "code": -32602, 00:33:40.865 "message": "Invalid parameters" 00:33:40.865 } 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:40.865 12:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:40.865 Adding namespace failed - expected result. 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:40.865 test case2: host connect to nvmf target in multiple paths 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.865 [2024-12-09 12:08:48.530014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.865 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:41.126 12:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:41.697 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:41.697 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:41.697 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:41.697 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:41.697 12:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:43.609 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:43.609 [global] 00:33:43.609 thread=1 00:33:43.609 invalidate=1 
00:33:43.609 rw=write 00:33:43.609 time_based=1 00:33:43.609 runtime=1 00:33:43.609 ioengine=libaio 00:33:43.609 direct=1 00:33:43.609 bs=4096 00:33:43.609 iodepth=1 00:33:43.609 norandommap=0 00:33:43.609 numjobs=1 00:33:43.609 00:33:43.609 verify_dump=1 00:33:43.609 verify_backlog=512 00:33:43.609 verify_state_save=0 00:33:43.609 do_verify=1 00:33:43.609 verify=crc32c-intel 00:33:43.609 [job0] 00:33:43.609 filename=/dev/nvme0n1 00:33:43.609 Could not set queue depth (nvme0n1) 00:33:43.868 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:43.868 fio-3.35 00:33:43.868 Starting 1 thread 00:33:45.255 00:33:45.255 job0: (groupid=0, jobs=1): err= 0: pid=311610: Mon Dec 9 12:08:52 2024 00:33:45.255 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:45.255 slat (nsec): min=9709, max=62759, avg=27385.47, stdev=3177.04 00:33:45.255 clat (usec): min=599, max=1250, avg=1070.68, stdev=75.38 00:33:45.255 lat (usec): min=626, max=1276, avg=1098.06, stdev=75.32 00:33:45.255 clat percentiles (usec): 00:33:45.255 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1020], 00:33:45.255 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:33:45.255 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:33:45.255 | 99.00th=[ 1221], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:33:45.255 | 99.99th=[ 1254] 00:33:45.255 write: IOPS=656, BW=2625KiB/s (2688kB/s)(2628KiB/1001msec); 0 zone resets 00:33:45.255 slat (usec): min=9, max=25684, avg=69.51, stdev=1000.92 00:33:45.255 clat (usec): min=264, max=857, avg=579.12, stdev=102.37 00:33:45.255 lat (usec): min=273, max=26412, avg=648.63, stdev=1012.38 00:33:45.255 clat percentiles (usec): 00:33:45.255 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 498], 00:33:45.255 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 603], 00:33:45.255 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 734], 00:33:45.255 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 857], 99.95th=[ 857], 00:33:45.255 | 99.99th=[ 857] 00:33:45.255 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:45.255 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:45.255 lat (usec) : 500=11.55%, 750=42.94%, 1000=7.96% 00:33:45.255 lat (msec) : 2=37.55% 00:33:45.255 cpu : usr=2.50%, sys=4.40%, ctx=1172, majf=0, minf=1 00:33:45.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:45.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.255 issued rwts: total=512,657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:45.255 00:33:45.255 Run status group 0 (all jobs): 00:33:45.255 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:33:45.255 WRITE: bw=2625KiB/s (2688kB/s), 2625KiB/s-2625KiB/s (2688kB/s-2688kB/s), io=2628KiB (2691kB), run=1001-1001msec 00:33:45.255 00:33:45.255 Disk stats (read/write): 00:33:45.255 nvme0n1: ios=526/512, merge=0/0, ticks=1475/229, in_queue=1704, util=98.30% 00:33:45.255 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:45.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:45.255 12:08:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@122 -- # sync 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # set +e 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # for i in {1..20} 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:33:45.255 rmmod nvme_tcp 00:33:45.255 rmmod nvme_fabrics 00:33:45.255 rmmod nvme_keyring 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # set -e 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@130 -- # return 0 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 310673 ']' 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 310673 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 310673 ']' 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 310673 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.255 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310673 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 310673' 00:33:45.516 killing process with pid 310673 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 310673 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 310673 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # iptr 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # remove_spdk_ns 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.516 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.068 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:33:48.068 00:33:48.068 real 0m15.326s 00:33:48.068 user 0m38.724s 00:33:48.068 sys 0m7.352s 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:48.069 ************************************ 00:33:48.069 END TEST nvmf_nmic 00:33:48.069 ************************************ 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:48.069 ************************************ 00:33:48.069 START TEST nvmf_fio_target 00:33:48.069 ************************************ 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:48.069 * Looking for test storage... 
00:33:48.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.069 --rc genhtml_branch_coverage=1 00:33:48.069 --rc genhtml_function_coverage=1 00:33:48.069 --rc genhtml_legend=1 00:33:48.069 --rc geninfo_all_blocks=1 00:33:48.069 --rc geninfo_unexecuted_blocks=1 00:33:48.069 00:33:48.069 ' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.069 --rc genhtml_branch_coverage=1 00:33:48.069 --rc genhtml_function_coverage=1 00:33:48.069 --rc genhtml_legend=1 00:33:48.069 --rc geninfo_all_blocks=1 00:33:48.069 --rc geninfo_unexecuted_blocks=1 00:33:48.069 00:33:48.069 ' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.069 --rc genhtml_branch_coverage=1 00:33:48.069 --rc genhtml_function_coverage=1 00:33:48.069 --rc genhtml_legend=1 00:33:48.069 --rc geninfo_all_blocks=1 00:33:48.069 --rc geninfo_unexecuted_blocks=1 00:33:48.069 00:33:48.069 ' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.069 --rc genhtml_branch_coverage=1 00:33:48.069 --rc genhtml_function_coverage=1 00:33:48.069 --rc genhtml_legend=1 00:33:48.069 --rc geninfo_all_blocks=1 00:33:48.069 --rc geninfo_unexecuted_blocks=1 00:33:48.069 
00:33:48.069 ' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.069 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # : 0 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@56 -- # have_pci_nics=0 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@310 -- # xtrace_disable 00:33:48.070 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_devs=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_devs 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_net_devs=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@318 -- # pci_drivers=() 00:33:54.663 12:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@318 -- # local -A pci_drivers 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # net_devs=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga net_devs 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # e810=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga e810 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # x722=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga x722 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@323 -- # mlx=() 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@323 -- # local -ga mlx 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:33:54.663 12:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:54.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:54.663 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:54.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:54.664 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:54.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.664 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:33:54.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:33:54.925 00:33:54.925 --- 10.0.0.2 ping statistics --- 00:33:54.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.925 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:33:54.925 00:33:54.925 --- 10.0.0.1 ping statistics --- 00:33:54.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.925 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.925 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=316063 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 316063 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 316063 ']' 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
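Note: the records above show nvmfappstart launching nvmf_tgt (pid 316063) inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until the RPC socket answers. Below is a minimal stand-alone sketch of that sequence; the paths and flags are copied from the log, while the polling loop is only an illustrative approximation of waitforlisten, not the harness's exact implementation.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Launch the target in the test netns with the same flags seen in the log.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Poll the default UNIX-domain RPC socket until the app starts responding.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done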
00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.187 12:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.187 [2024-12-09 12:09:02.872228] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:55.187 [2024-12-09 12:09:02.873382] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:33:55.187 [2024-12-09 12:09:02.873435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.187 [2024-12-09 12:09:02.971507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.187 [2024-12-09 12:09:03.025193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.187 [2024-12-09 12:09:03.025249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.187 [2024-12-09 12:09:03.025258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.187 [2024-12-09 12:09:03.025272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.187 [2024-12-09 12:09:03.025278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.187 [2024-12-09 12:09:03.027676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.187 [2024-12-09 12:09:03.027954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.187 [2024-12-09 12:09:03.027956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.187 [2024-12-09 12:09:03.027777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.449 [2024-12-09 12:09:03.106599] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.449 [2024-12-09 12:09:03.106649] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:55.449 [2024-12-09 12:09:03.107413] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:55.449 [2024-12-09 12:09:03.107627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:55.449 [2024-12-09 12:09:03.107800] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
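Note: the thread.c notices above confirm that app_thread and the four nvmf_tgt poll-group threads (one per core in the 0xF mask) were placed in interrupt mode at startup. One way to check the same state at runtime is the framework_get_reactors RPC; this is only a sketch, and the in_interrupt field plus the jq filter are assumptions about the JSON layout rather than a documented contract.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Print each reactor's lcore and whether it currently runs in interrupt mode.
    "$SPDK/scripts/rpc.py" framework_get_reactors \
        | jq '.reactors[] | {lcore: .lcore, in_interrupt: .in_interrupt}'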
00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.022 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:56.022 [2024-12-09 12:09:03.897034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.283 12:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.283 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:56.283 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.545 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:56.545 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:56.806 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:56.806 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.068 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:57.068 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:57.330 12:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.330 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:57.330 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.591 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:57.591 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:57.852 12:09:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:57.852 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:58.113 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:58.113 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:58.113 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.374 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:58.374 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:58.635 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.635 [2024-12-09 12:09:06.472784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.635 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:58.897 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:59.158 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:59.418 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:01.964 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:01.964 [global] 00:34:01.964 thread=1 00:34:01.964 invalidate=1 00:34:01.964 rw=write 00:34:01.964 time_based=1 00:34:01.964 runtime=1 00:34:01.964 ioengine=libaio 00:34:01.964 direct=1 00:34:01.964 bs=4096 00:34:01.964 iodepth=1 00:34:01.964 norandommap=0 00:34:01.964 numjobs=1 00:34:01.964 00:34:01.964 verify_dump=1 00:34:01.964 verify_backlog=512 00:34:01.964 verify_state_save=0 00:34:01.964 do_verify=1 00:34:01.964 verify=crc32c-intel 00:34:01.964 [job0] 00:34:01.964 filename=/dev/nvme0n1 00:34:01.964 [job1] 00:34:01.964 filename=/dev/nvme0n2 00:34:01.964 [job2] 00:34:01.964 filename=/dev/nvme0n3 00:34:01.964 [job3] 00:34:01.964 filename=/dev/nvme0n4 00:34:01.964 Could not set queue depth (nvme0n1) 00:34:01.964 Could not set queue depth (nvme0n2) 00:34:01.964 Could not set queue depth (nvme0n3) 00:34:01.964 Could not set queue depth (nvme0n4) 00:34:01.964 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.964 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.965 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.965 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:01.965 fio-3.35 00:34:01.965 Starting 4 threads 00:34:03.383 00:34:03.383 job0: (groupid=0, jobs=1): err= 0: pid=317655: Mon Dec 9 12:09:10 2024 00:34:03.383 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:03.383 slat (nsec): min=27405, max=48786, avg=28705.56, stdev=2645.54 00:34:03.383 clat (usec): min=415, max=1436, avg=1072.74, stdev=123.62 00:34:03.383 lat (usec): min=444, max=1465, avg=1101.45, stdev=123.48 00:34:03.383 clat percentiles (usec): 00:34:03.383 | 1.00th=[ 709], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 971], 00:34:03.383 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:34:03.383 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:34:03.383 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1434], 99.95th=[ 1434], 00:34:03.383 | 99.99th=[ 1434] 00:34:03.383 write: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec); 0 zone resets 00:34:03.383 slat (nsec): min=9604, max=61088, avg=33263.88, stdev=10620.82 00:34:03.383 clat (usec): min=179, max=1302, avg=653.33, stdev=169.60 00:34:03.383 lat (usec): min=191, max=1338, avg=686.59, stdev=173.59 00:34:03.383 clat percentiles (usec): 00:34:03.383 | 1.00th=[ 231], 5.00th=[ 359], 10.00th=[ 424], 20.00th=[ 506], 00:34:03.383 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 668], 60.00th=[ 709], 00:34:03.383 | 70.00th=[ 742], 80.00th=[ 799], 90.00th=[ 865], 95.00th=[ 914], 00:34:03.383 | 
99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:03.383 | 99.99th=[ 1303] 00:34:03.383 bw ( KiB/s): min= 4096, max= 4096, per=44.81%, avg=4096.00, stdev= 0.00, samples=1 00:34:03.383 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:03.383 lat (usec) : 250=0.71%, 500=10.01%, 750=28.88%, 1000=25.78% 00:34:03.383 lat (msec) : 2=34.63% 00:34:03.383 cpu : usr=2.90%, sys=4.20%, ctx=1131, majf=0, minf=1 00:34:03.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.383 issued rwts: total=512,617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.383 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.383 job1: (groupid=0, jobs=1): err= 0: pid=317666: Mon Dec 9 12:09:10 2024 00:34:03.383 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:34:03.383 slat (nsec): min=25134, max=29131, avg=25779.44, stdev=885.89 00:34:03.383 clat (usec): min=1104, max=42065, avg=39613.13, stdev=9612.99 00:34:03.383 lat (usec): min=1133, max=42091, avg=39638.91, stdev=9612.15 00:34:03.383 clat percentiles (usec): 00:34:03.383 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41681], 00:34:03.383 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:03.383 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:03.383 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:03.383 | 99.99th=[42206] 00:34:03.383 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:34:03.383 slat (nsec): min=9852, max=71630, avg=31954.19, stdev=6882.92 00:34:03.383 clat (usec): min=168, max=3531, avg=587.93, stdev=213.26 00:34:03.383 lat (usec): min=201, max=3582, avg=619.88, stdev=214.27 00:34:03.383 clat percentiles (usec): 00:34:03.383 | 1.00th=[ 260], 5.00th=[ 326], 10.00th=[ 363], 20.00th=[ 433], 00:34:03.383 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[ 627], 00:34:03.383 | 70.00th=[ 660], 80.00th=[ 725], 90.00th=[ 807], 95.00th=[ 881], 00:34:03.383 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 3523], 99.95th=[ 3523], 00:34:03.383 | 99.99th=[ 3523] 00:34:03.383 bw ( KiB/s): min= 4096, max= 4096, per=44.81%, avg=4096.00, stdev= 0.00, samples=1 00:34:03.383 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:03.383 lat (usec) : 250=0.57%, 500=30.38%, 750=50.19%, 1000=13.96% 00:34:03.383 lat (msec) : 2=1.51%, 4=0.19%, 50=3.21% 00:34:03.383 cpu : usr=0.68%, sys=1.64%, ctx=530, majf=0, minf=2 00:34:03.383 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.383 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.384 job2: (groupid=0, jobs=1): err= 0: pid=317682: Mon Dec 9 12:09:10 2024 00:34:03.384 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:03.384 slat (nsec): min=7818, max=60742, avg=27620.86, stdev=3851.79 00:34:03.384 clat (usec): min=512, max=1450, avg=1050.77, stdev=138.88 00:34:03.384 lat (usec): min=539, max=1476, avg=1078.39, stdev=138.99 00:34:03.384 clat percentiles (usec): 00:34:03.384 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 857], 
20.00th=[ 938], 00:34:03.384 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:34:03.384 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1270], 00:34:03.384 | 99.00th=[ 1369], 99.50th=[ 1418], 99.90th=[ 1450], 99.95th=[ 1450], 00:34:03.384 | 99.99th=[ 1450] 00:34:03.384 write: IOPS=664, BW=2657KiB/s (2721kB/s)(2660KiB/1001msec); 0 zone resets 00:34:03.384 slat (nsec): min=9645, max=66793, avg=32347.34, stdev=8084.41 00:34:03.384 clat (usec): min=294, max=1890, avg=627.59, stdev=144.73 00:34:03.384 lat (usec): min=306, max=1925, avg=659.94, stdev=146.59 00:34:03.384 clat percentiles (usec): 00:34:03.384 | 1.00th=[ 318], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 515], 00:34:03.384 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:34:03.384 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 848], 00:34:03.384 | 99.00th=[ 1029], 99.50th=[ 1123], 99.90th=[ 1893], 99.95th=[ 1893], 00:34:03.384 | 99.99th=[ 1893] 00:34:03.384 bw ( KiB/s): min= 4096, max= 4096, per=44.81%, avg=4096.00, stdev= 0.00, samples=1 00:34:03.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:03.384 lat (usec) : 500=10.20%, 750=38.15%, 1000=21.84% 00:34:03.384 lat (msec) : 2=29.82% 00:34:03.384 cpu : usr=3.20%, sys=3.40%, ctx=1177, majf=0, minf=1 00:34:03.384 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.384 issued rwts: total=512,665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.384 job3: (groupid=0, jobs=1): err= 0: pid=317688: Mon Dec 9 12:09:10 2024 00:34:03.384 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:03.384 slat (nsec): min=7957, max=65078, avg=27001.60, stdev=2866.49 00:34:03.384 clat (usec): min=487, max=41878, avg=1236.26, stdev=3090.03 00:34:03.384 lat (usec): min=513, max=41905, avg=1263.26, stdev=3090.03 00:34:03.384 clat percentiles (usec): 00:34:03.384 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:34:03.384 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1029], 00:34:03.384 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1221], 00:34:03.384 | 99.00th=[ 1319], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:34:03.384 | 99.99th=[41681] 00:34:03.384 write: IOPS=570, BW=2282KiB/s (2336kB/s)(2284KiB/1001msec); 0 zone resets 00:34:03.384 slat (nsec): min=9372, max=69268, avg=31167.92, stdev=9065.13 00:34:03.384 clat (usec): min=158, max=1841, avg=573.00, stdev=167.61 00:34:03.384 lat (usec): min=169, max=1851, avg=604.17, stdev=170.51 00:34:03.384 clat percentiles (usec): 00:34:03.384 | 1.00th=[ 182], 5.00th=[ 306], 10.00th=[ 367], 20.00th=[ 437], 00:34:03.384 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 619], 00:34:03.384 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 824], 00:34:03.384 | 99.00th=[ 947], 99.50th=[ 1012], 99.90th=[ 1844], 99.95th=[ 1844], 00:34:03.384 | 99.99th=[ 1844] 00:34:03.384 bw ( KiB/s): min= 4096, max= 4096, per=44.81%, avg=4096.00, stdev= 0.00, samples=1 00:34:03.384 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:03.384 lat (usec) : 250=1.20%, 500=16.71%, 750=30.10%, 1000=28.35% 00:34:03.384 lat (msec) : 2=23.36%, 50=0.28% 00:34:03.384 cpu : usr=2.70%, sys=3.80%, ctx=1083, majf=0, minf=1 00:34:03.384 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.384 issued rwts: total=512,571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.384 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.384 00:34:03.384 Run status group 0 (all jobs): 00:34:03.384 READ: bw=6006KiB/s (6150kB/s), 69.6KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=6216KiB (6365kB), run=1001-1035msec 00:34:03.384 WRITE: bw=9140KiB/s (9359kB/s), 1979KiB/s-2657KiB/s (2026kB/s-2721kB/s), io=9460KiB (9687kB), run=1001-1035msec 00:34:03.384 00:34:03.384 Disk stats (read/write): 00:34:03.384 nvme0n1: ios=480/512, merge=0/0, ticks=1002/246, in_queue=1248, util=99.40% 00:34:03.384 nvme0n2: ios=37/512, merge=0/0, ticks=536/286, in_queue=822, util=85.99% 00:34:03.384 nvme0n3: ios=461/512, merge=0/0, ticks=444/290, in_queue=734, util=88.34% 00:34:03.384 nvme0n4: ios=399/512, merge=0/0, ticks=956/229, in_queue=1185, util=91.42% 00:34:03.384 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:03.384 [global] 00:34:03.384 thread=1 00:34:03.384 invalidate=1 00:34:03.384 rw=randwrite 00:34:03.384 time_based=1 00:34:03.384 runtime=1 00:34:03.384 ioengine=libaio 00:34:03.384 direct=1 00:34:03.384 bs=4096 00:34:03.384 iodepth=1 00:34:03.384 norandommap=0 00:34:03.384 numjobs=1 00:34:03.384 00:34:03.384 verify_dump=1 00:34:03.384 verify_backlog=512 00:34:03.384 verify_state_save=0 00:34:03.384 do_verify=1 00:34:03.384 verify=crc32c-intel 00:34:03.384 [job0] 00:34:03.384 filename=/dev/nvme0n1 00:34:03.384 [job1] 00:34:03.384 filename=/dev/nvme0n2 00:34:03.384 [job2] 00:34:03.384 filename=/dev/nvme0n3 00:34:03.384 [job3] 00:34:03.384 filename=/dev/nvme0n4 00:34:03.384 Could not set queue depth (nvme0n1) 00:34:03.384 Could not set queue depth (nvme0n2) 00:34:03.384 Could not set queue depth (nvme0n3) 00:34:03.384 Could not set queue depth (nvme0n4) 00:34:03.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.650 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.650 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.650 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.650 fio-3.35 00:34:03.650 Starting 4 threads 00:34:05.038 00:34:05.038 job0: (groupid=0, jobs=1): err= 0: pid=318462: Mon Dec 9 12:09:12 2024 00:34:05.038 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:05.038 slat (nsec): min=7847, max=64924, avg=25343.01, stdev=3666.42 00:34:05.038 clat (usec): min=584, max=1260, avg=994.26, stdev=111.16 00:34:05.038 lat (usec): min=610, max=1285, avg=1019.60, stdev=111.31 00:34:05.038 clat percentiles (usec): 00:34:05.038 | 1.00th=[ 652], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 922], 00:34:05.038 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1029], 00:34:05.038 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:05.038 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:34:05.038 | 99.99th=[ 1254] 00:34:05.038 write: IOPS=718, BW=2873KiB/s 
(2942kB/s)(2876KiB/1001msec); 0 zone resets 00:34:05.038 slat (nsec): min=9110, max=78587, avg=30320.95, stdev=7405.98 00:34:05.038 clat (usec): min=227, max=1158, avg=620.30, stdev=172.31 00:34:05.038 lat (usec): min=258, max=1190, avg=650.62, stdev=173.89 00:34:05.038 clat percentiles (usec): 00:34:05.038 | 1.00th=[ 289], 5.00th=[ 359], 10.00th=[ 408], 20.00th=[ 461], 00:34:05.038 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 660], 00:34:05.038 | 70.00th=[ 725], 80.00th=[ 775], 90.00th=[ 848], 95.00th=[ 922], 00:34:05.038 | 99.00th=[ 1004], 99.50th=[ 1037], 99.90th=[ 1156], 99.95th=[ 1156], 00:34:05.038 | 99.99th=[ 1156] 00:34:05.038 bw ( KiB/s): min= 4096, max= 4096, per=39.03%, avg=4096.00, stdev= 0.00, samples=1 00:34:05.038 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:05.038 lat (usec) : 250=0.08%, 500=16.00%, 750=28.35%, 1000=34.12% 00:34:05.038 lat (msec) : 2=21.45% 00:34:05.038 cpu : usr=2.30%, sys=3.20%, ctx=1233, majf=0, minf=1 00:34:05.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.038 issued rwts: total=512,719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:05.038 job1: (groupid=0, jobs=1): err= 0: pid=318463: Mon Dec 9 12:09:12 2024 00:34:05.038 read: IOPS=15, BW=61.7KiB/s (63.2kB/s)(64.0KiB/1037msec) 00:34:05.038 slat (nsec): min=25159, max=29129, avg=25794.94, stdev=931.36 00:34:05.038 clat (usec): min=41473, max=42078, avg=41929.39, stdev=136.33 00:34:05.038 lat (usec): min=41499, max=42103, avg=41955.18, stdev=136.44 00:34:05.038 clat percentiles (usec): 00:34:05.038 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:05.038 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:05.038 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:05.038 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:05.038 | 99.99th=[42206] 00:34:05.038 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:34:05.038 slat (nsec): min=9437, max=50168, avg=29390.08, stdev=7911.29 00:34:05.038 clat (usec): min=269, max=1032, avg=675.90, stdev=125.95 00:34:05.038 lat (usec): min=299, max=1077, avg=705.29, stdev=128.23 00:34:05.038 clat percentiles (usec): 00:34:05.038 | 1.00th=[ 383], 5.00th=[ 445], 10.00th=[ 506], 20.00th=[ 570], 00:34:05.038 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 717], 00:34:05.038 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 873], 00:34:05.038 | 99.00th=[ 930], 99.50th=[ 1020], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:05.038 | 99.99th=[ 1037] 00:34:05.038 bw ( KiB/s): min= 4096, max= 4096, per=39.03%, avg=4096.00, stdev= 0.00, samples=1 00:34:05.038 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:05.038 lat (usec) : 500=8.71%, 750=58.33%, 1000=29.36% 00:34:05.038 lat (msec) : 2=0.57%, 50=3.03% 00:34:05.038 cpu : usr=0.97%, sys=1.25%, ctx=528, majf=0, minf=1 00:34:05.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.038 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.038 
latency : target=0, window=0, percentile=100.00%, depth=1 00:34:05.038 job2: (groupid=0, jobs=1): err= 0: pid=318465: Mon Dec 9 12:09:12 2024 00:34:05.038 read: IOPS=16, BW=65.5KiB/s (67.1kB/s)(68.0KiB/1038msec) 00:34:05.038 slat (nsec): min=26359, max=27479, avg=26852.59, stdev=301.80 00:34:05.038 clat (usec): min=1018, max=44865, avg=39526.46, stdev=9958.81 00:34:05.038 lat (usec): min=1045, max=44892, avg=39553.31, stdev=9958.89 00:34:05.039 clat percentiles (usec): 00:34:05.039 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41157], 00:34:05.039 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:05.039 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[44827], 00:34:05.039 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:34:05.039 | 99.99th=[44827] 00:34:05.039 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:34:05.039 slat (nsec): min=10054, max=54386, avg=32215.89, stdev=6783.01 00:34:05.039 clat (usec): min=198, max=1145, avg=672.03, stdev=153.18 00:34:05.039 lat (usec): min=210, max=1156, avg=704.25, stdev=154.34 00:34:05.039 clat percentiles (usec): 00:34:05.039 | 1.00th=[ 334], 5.00th=[ 424], 10.00th=[ 482], 20.00th=[ 537], 00:34:05.039 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 717], 00:34:05.039 | 70.00th=[ 750], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 930], 00:34:05.039 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:05.039 | 99.99th=[ 1139] 00:34:05.039 bw ( KiB/s): min= 4096, max= 4096, per=39.03%, avg=4096.00, stdev= 0.00, samples=1 00:34:05.039 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:05.039 lat (usec) : 250=0.38%, 500=11.15%, 750=55.77%, 1000=26.84% 00:34:05.039 lat (msec) : 2=2.84%, 50=3.02% 00:34:05.039 cpu : usr=0.96%, sys=1.45%, ctx=531, majf=0, minf=1 00:34:05.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.039 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:05.039 job3: (groupid=0, jobs=1): err= 0: pid=318466: Mon Dec 9 12:09:12 2024 00:34:05.039 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:05.039 slat (nsec): min=7179, max=63895, avg=26246.57, stdev=4863.29 00:34:05.039 clat (usec): min=474, max=1143, avg=830.88, stdev=131.47 00:34:05.039 lat (usec): min=501, max=1187, avg=857.12, stdev=131.73 00:34:05.039 clat percentiles (usec): 00:34:05.039 | 1.00th=[ 553], 5.00th=[ 586], 10.00th=[ 644], 20.00th=[ 701], 00:34:05.039 | 30.00th=[ 750], 40.00th=[ 799], 50.00th=[ 848], 60.00th=[ 889], 00:34:05.039 | 70.00th=[ 922], 80.00th=[ 955], 90.00th=[ 979], 95.00th=[ 1012], 00:34:05.039 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:05.039 | 99.99th=[ 1139] 00:34:05.039 write: IOPS=979, BW=3916KiB/s (4010kB/s)(3920KiB/1001msec); 0 zone resets 00:34:05.039 slat (nsec): min=9713, max=78750, avg=31403.48, stdev=8749.75 00:34:05.039 clat (usec): min=142, max=861, avg=528.80, stdev=122.07 00:34:05.039 lat (usec): min=152, max=909, avg=560.21, stdev=124.85 00:34:05.039 clat percentiles (usec): 00:34:05.039 | 1.00th=[ 245], 5.00th=[ 314], 10.00th=[ 371], 20.00th=[ 412], 00:34:05.039 | 30.00th=[ 474], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 562], 00:34:05.039 | 
70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 717], 00:34:05.039 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:34:05.039 | 99.99th=[ 865] 00:34:05.039 bw ( KiB/s): min= 4096, max= 4096, per=39.03%, avg=4096.00, stdev= 0.00, samples=1 00:34:05.039 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:05.039 lat (usec) : 250=0.74%, 500=23.93%, 750=49.33%, 1000=23.59% 00:34:05.039 lat (msec) : 2=2.41% 00:34:05.039 cpu : usr=2.70%, sys=4.00%, ctx=1494, majf=0, minf=1 00:34:05.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.039 issued rwts: total=512,980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:05.039 00:34:05.039 Run status group 0 (all jobs): 00:34:05.039 READ: bw=4073KiB/s (4171kB/s), 61.7KiB/s-2046KiB/s (63.2kB/s-2095kB/s), io=4228KiB (4329kB), run=1001-1038msec 00:34:05.039 WRITE: bw=10.2MiB/s (10.7MB/s), 1973KiB/s-3916KiB/s (2020kB/s-4010kB/s), io=10.6MiB (11.2MB), run=1001-1038msec 00:34:05.039 00:34:05.039 Disk stats (read/write): 00:34:05.039 nvme0n1: ios=505/512, merge=0/0, ticks=583/331, in_queue=914, util=95.89% 00:34:05.039 nvme0n2: ios=61/512, merge=0/0, ticks=568/325, in_queue=893, util=91.64% 00:34:05.039 nvme0n3: ios=55/512, merge=0/0, ticks=1156/334, in_queue=1490, util=96.41% 00:34:05.039 nvme0n4: ios=555/677, merge=0/0, ticks=1223/336, in_queue=1559, util=96.90% 00:34:05.039 12:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:05.039 [global] 00:34:05.039 thread=1 00:34:05.039 invalidate=1 00:34:05.039 rw=write 00:34:05.039 time_based=1 00:34:05.039 runtime=1 00:34:05.039 ioengine=libaio 00:34:05.039 direct=1 00:34:05.039 bs=4096 00:34:05.039 iodepth=128 00:34:05.039 norandommap=0 00:34:05.039 numjobs=1 00:34:05.039 00:34:05.039 verify_dump=1 00:34:05.039 verify_backlog=512 00:34:05.039 verify_state_save=0 00:34:05.039 do_verify=1 00:34:05.039 verify=crc32c-intel 00:34:05.039 [job0] 00:34:05.039 filename=/dev/nvme0n1 00:34:05.039 [job1] 00:34:05.039 filename=/dev/nvme0n2 00:34:05.039 [job2] 00:34:05.039 filename=/dev/nvme0n3 00:34:05.039 [job3] 00:34:05.039 filename=/dev/nvme0n4 00:34:05.039 Could not set queue depth (nvme0n1) 00:34:05.039 Could not set queue depth (nvme0n2) 00:34:05.039 Could not set queue depth (nvme0n3) 00:34:05.039 Could not set queue depth (nvme0n4) 00:34:05.299 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.299 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.299 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.299 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:05.299 fio-3.35 00:34:05.299 Starting 4 threads 00:34:06.686 00:34:06.686 job0: (groupid=0, jobs=1): err= 0: pid=319082: Mon Dec 9 12:09:14 2024 00:34:06.686 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.1MiB/1009msec) 00:34:06.686 slat (nsec): min=912, max=13750k, avg=61027.66, stdev=500592.02 00:34:06.686 clat (usec): min=2725, max=39884, avg=8509.15, 
stdev=3734.61 00:34:06.686 lat (usec): min=2900, max=42090, avg=8570.17, stdev=3768.50 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6259], 00:34:06.686 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7832], 00:34:06.686 | 70.00th=[ 9110], 80.00th=[10552], 90.00th=[12518], 95.00th=[15139], 00:34:06.686 | 99.00th=[23462], 99.50th=[23725], 99.90th=[39060], 99.95th=[39060], 00:34:06.686 | 99.99th=[40109] 00:34:06.686 write: IOPS=8118, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1009msec); 0 zone resets 00:34:06.686 slat (nsec): min=1584, max=9813.0k, avg=53883.71, stdev=374185.93 00:34:06.686 clat (usec): min=513, max=41133, avg=7602.98, stdev=4556.25 00:34:06.686 lat (usec): min=620, max=41142, avg=7656.86, stdev=4582.93 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 2507], 5.00th=[ 3785], 10.00th=[ 4293], 20.00th=[ 5145], 00:34:06.686 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6849], 00:34:06.686 | 70.00th=[ 7177], 80.00th=[ 8848], 90.00th=[11600], 95.00th=[14222], 00:34:06.686 | 99.00th=[33817], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:34:06.686 | 99.99th=[41157] 00:34:06.686 bw ( KiB/s): min=27960, max=36864, per=31.38%, avg=32412.00, stdev=6296.08, samples=2 00:34:06.686 iops : min= 6990, max= 9216, avg=8103.00, stdev=1574.02, samples=2 00:34:06.686 lat (usec) : 750=0.01%, 1000=0.02% 00:34:06.686 lat (msec) : 2=0.26%, 4=3.23%, 10=77.41%, 20=17.05%, 50=2.02% 00:34:06.686 cpu : usr=5.46%, sys=6.85%, ctx=645, majf=0, minf=1 00:34:06.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.686 issued rwts: total=7718,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.686 job1: (groupid=0, jobs=1): err= 0: pid=319083: Mon Dec 9 12:09:14 2024 00:34:06.686 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:34:06.686 slat (nsec): min=926, max=14510k, avg=103899.43, stdev=696874.38 00:34:06.686 clat (usec): min=1815, max=33739, avg=13601.58, stdev=4874.28 00:34:06.686 lat (usec): min=1851, max=33745, avg=13705.48, stdev=4923.58 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 3458], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 9634], 00:34:06.686 | 30.00th=[11207], 40.00th=[12518], 50.00th=[13566], 60.00th=[14484], 00:34:06.686 | 70.00th=[15533], 80.00th=[16712], 90.00th=[18220], 95.00th=[20317], 00:34:06.686 | 99.00th=[31589], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:34:06.686 | 99.99th=[33817] 00:34:06.686 write: IOPS=5035, BW=19.7MiB/s (20.6MB/s)(19.9MiB/1010msec); 0 zone resets 00:34:06.686 slat (nsec): min=1582, max=10390k, avg=92950.48, stdev=582094.91 00:34:06.686 clat (usec): min=2714, max=31086, avg=12710.78, stdev=5177.68 00:34:06.686 lat (usec): min=2723, max=31090, avg=12803.73, stdev=5222.59 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 4228], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 9372], 00:34:06.686 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11863], 60.00th=[12518], 00:34:06.686 | 70.00th=[13829], 80.00th=[15926], 90.00th=[20055], 95.00th=[25035], 00:34:06.686 | 99.00th=[28181], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:34:06.686 | 99.99th=[31065] 00:34:06.686 bw ( KiB/s): min=17736, max=21936, per=19.21%, avg=19836.00, 
stdev=2969.85, samples=2 00:34:06.686 iops : min= 4434, max= 5484, avg=4959.00, stdev=742.46, samples=2 00:34:06.686 lat (msec) : 2=0.10%, 4=0.88%, 10=27.92%, 20=62.95%, 50=8.15% 00:34:06.686 cpu : usr=3.96%, sys=5.65%, ctx=344, majf=0, minf=2 00:34:06.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.686 issued rwts: total=4608,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.686 job2: (groupid=0, jobs=1): err= 0: pid=319084: Mon Dec 9 12:09:14 2024 00:34:06.686 read: IOPS=6862, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1004msec) 00:34:06.686 slat (nsec): min=913, max=7054.9k, avg=72542.39, stdev=464643.40 00:34:06.686 clat (usec): min=1377, max=21888, avg=9234.98, stdev=1852.65 00:34:06.686 lat (usec): min=4341, max=21893, avg=9307.52, stdev=1888.13 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 5276], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8029], 00:34:06.686 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:34:06.686 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11338], 95.00th=[12387], 00:34:06.686 | 99.00th=[16188], 99.50th=[16712], 99.90th=[21890], 99.95th=[21890], 00:34:06.686 | 99.99th=[21890] 00:34:06.686 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:34:06.686 slat (nsec): min=1552, max=6729.0k, avg=65702.79, stdev=395821.26 00:34:06.686 clat (usec): min=1346, max=22347, avg=8860.87, stdev=1807.47 00:34:06.686 lat (usec): min=1354, max=22355, avg=8926.58, stdev=1819.74 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 7439], 20.00th=[ 7963], 00:34:06.686 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8848], 00:34:06.686 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10945], 95.00th=[11863], 00:34:06.686 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21365], 99.95th=[21365], 00:34:06.686 | 99.99th=[22414] 00:34:06.686 bw ( KiB/s): min=28152, max=29192, per=27.76%, avg=28672.00, stdev=735.39, samples=2 00:34:06.686 iops : min= 7038, max= 7298, avg=7168.00, stdev=183.85, samples=2 00:34:06.686 lat (msec) : 2=0.07%, 4=0.01%, 10=81.47%, 20=18.21%, 50=0.24% 00:34:06.686 cpu : usr=5.38%, sys=5.28%, ctx=608, majf=0, minf=2 00:34:06.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.686 issued rwts: total=6890,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.686 job3: (groupid=0, jobs=1): err= 0: pid=319085: Mon Dec 9 12:09:14 2024 00:34:06.686 read: IOPS=5243, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1004msec) 00:34:06.686 slat (nsec): min=912, max=11889k, avg=99085.08, stdev=655225.64 00:34:06.686 clat (usec): min=1300, max=46181, avg=12746.99, stdev=6109.27 00:34:06.686 lat (usec): min=3318, max=46187, avg=12846.08, stdev=6160.67 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 4817], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8356], 00:34:06.686 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[12649], 00:34:06.686 | 70.00th=[15008], 80.00th=[16450], 90.00th=[18744], 95.00th=[22938], 00:34:06.686 | 
99.00th=[38011], 99.50th=[40633], 99.90th=[46400], 99.95th=[46400], 00:34:06.686 | 99.99th=[46400] 00:34:06.686 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:34:06.686 slat (nsec): min=1559, max=9553.1k, avg=80170.21, stdev=514550.61 00:34:06.686 clat (usec): min=4019, max=34302, avg=10620.97, stdev=3064.54 00:34:06.686 lat (usec): min=4023, max=34310, avg=10701.14, stdev=3113.57 00:34:06.686 clat percentiles (usec): 00:34:06.686 | 1.00th=[ 4817], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8455], 00:34:06.686 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10552], 00:34:06.686 | 70.00th=[11994], 80.00th=[13042], 90.00th=[13960], 95.00th=[14746], 00:34:06.686 | 99.00th=[22414], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:34:06.686 | 99.99th=[34341] 00:34:06.687 bw ( KiB/s): min=20480, max=24576, per=21.81%, avg=22528.00, stdev=2896.31, samples=2 00:34:06.687 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:06.687 lat (msec) : 2=0.01%, 4=0.29%, 10=47.59%, 20=47.44%, 50=4.67% 00:34:06.687 cpu : usr=3.39%, sys=5.78%, ctx=377, majf=0, minf=2 00:34:06.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:06.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:06.687 issued rwts: total=5264,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:06.687 00:34:06.687 Run status group 0 (all jobs): 00:34:06.687 READ: bw=94.7MiB/s (99.3MB/s), 17.8MiB/s-29.9MiB/s (18.7MB/s-31.3MB/s), io=95.6MiB (100MB), run=1004-1010msec 00:34:06.687 WRITE: bw=101MiB/s (106MB/s), 19.7MiB/s-31.7MiB/s (20.6MB/s-33.3MB/s), io=102MiB (107MB), run=1004-1010msec 00:34:06.687 00:34:06.687 Disk stats (read/write): 00:34:06.687 nvme0n1: ios=6928/7168, merge=0/0, ticks=49442/42634, in_queue=92076, util=87.17% 00:34:06.687 nvme0n2: ios=3634/3668, merge=0/0, ticks=23927/24994, in_queue=48921, util=86.88% 00:34:06.687 nvme0n3: ios=5238/5632, merge=0/0, ticks=23104/22976, in_queue=46080, util=90.78% 00:34:06.687 nvme0n4: ios=3641/4076, merge=0/0, ticks=22134/17822, in_queue=39956, util=98.54% 00:34:06.687 12:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:06.687 [global] 00:34:06.687 thread=1 00:34:06.687 invalidate=1 00:34:06.687 rw=randwrite 00:34:06.687 time_based=1 00:34:06.687 runtime=1 00:34:06.687 ioengine=libaio 00:34:06.687 direct=1 00:34:06.687 bs=4096 00:34:06.687 iodepth=128 00:34:06.687 norandommap=0 00:34:06.687 numjobs=1 00:34:06.687 00:34:06.687 verify_dump=1 00:34:06.687 verify_backlog=512 00:34:06.687 verify_state_save=0 00:34:06.687 do_verify=1 00:34:06.687 verify=crc32c-intel 00:34:06.687 [job0] 00:34:06.687 filename=/dev/nvme0n1 00:34:06.687 [job1] 00:34:06.687 filename=/dev/nvme0n2 00:34:06.687 [job2] 00:34:06.687 filename=/dev/nvme0n3 00:34:06.687 [job3] 00:34:06.687 filename=/dev/nvme0n4 00:34:06.687 Could not set queue depth (nvme0n1) 00:34:06.687 Could not set queue depth (nvme0n2) 00:34:06.687 Could not set queue depth (nvme0n3) 00:34:06.687 Could not set queue depth (nvme0n4) 00:34:06.947 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.947 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.947 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.947 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:06.947 fio-3.35 00:34:06.947 Starting 4 threads 00:34:08.331 00:34:08.331 job0: (groupid=0, jobs=1): err= 0: pid=319599: Mon Dec 9 12:09:15 2024 00:34:08.331 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:34:08.331 slat (nsec): min=903, max=15640k, avg=106032.95, stdev=793351.56 00:34:08.331 clat (usec): min=3062, max=39420, avg=13031.88, stdev=6583.65 00:34:08.331 lat (usec): min=3093, max=39527, avg=13137.91, stdev=6669.95 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7832], 00:34:08.331 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[11076], 60.00th=[12649], 00:34:08.331 | 70.00th=[14615], 80.00th=[17957], 90.00th=[24773], 95.00th=[27132], 00:34:08.331 | 99.00th=[31851], 99.50th=[32900], 99.90th=[39584], 99.95th=[39584], 00:34:08.331 | 99.99th=[39584] 00:34:08.331 write: IOPS=4768, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1004msec); 0 zone resets 00:34:08.331 slat (nsec): min=1575, max=33907k, avg=96912.53, stdev=690845.23 00:34:08.331 clat (usec): min=1581, max=53917, avg=14026.71, stdev=9090.31 00:34:08.331 lat (usec): min=1901, max=53925, avg=14123.63, stdev=9141.22 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 2999], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 6849], 00:34:08.331 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[11863], 60.00th=[13829], 00:34:08.331 | 70.00th=[18220], 80.00th=[20579], 90.00th=[24249], 95.00th=[32113], 00:34:08.331 | 99.00th=[49021], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:34:08.331 | 99.99th=[53740] 00:34:08.331 bw ( KiB/s): min=13896, max=23384, per=21.68%, avg=18640.00, stdev=6709.03, samples=2 00:34:08.331 iops : min= 3474, max= 5846, avg=4660.00, stdev=1677.26, samples=2 00:34:08.331 lat (msec) : 2=0.09%, 4=3.31%, 10=41.87%, 20=35.33%, 50=19.07% 00:34:08.331 lat (msec) : 100=0.33% 00:34:08.331 cpu : usr=2.39%, sys=5.08%, ctx=436, majf=0, minf=1 00:34:08.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:08.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.331 issued rwts: total=4608,4788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:08.331 job1: (groupid=0, jobs=1): err= 0: pid=319601: Mon Dec 9 12:09:15 2024 00:34:08.331 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:34:08.331 slat (nsec): min=940, max=9762.6k, avg=93883.12, stdev=628977.18 00:34:08.331 clat (usec): min=2322, max=56006, avg=11007.84, stdev=6344.78 00:34:08.331 lat (usec): min=2327, max=56012, avg=11101.72, stdev=6406.46 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 3228], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 6521], 00:34:08.331 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[11076], 00:34:08.331 | 70.00th=[11863], 80.00th=[13960], 90.00th=[16319], 95.00th=[20055], 00:34:08.331 | 99.00th=[42206], 99.50th=[49021], 99.90th=[55837], 99.95th=[55837], 00:34:08.331 | 99.99th=[55837] 00:34:08.331 write: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:34:08.331 slat (nsec): min=1581, max=14944k, 
avg=149508.66, stdev=801437.01 00:34:08.331 clat (usec): min=1128, max=85980, avg=21590.77, stdev=20133.50 00:34:08.331 lat (usec): min=1137, max=85988, avg=21740.28, stdev=20268.47 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 2008], 5.00th=[ 3425], 10.00th=[ 5276], 20.00th=[ 6849], 00:34:08.331 | 30.00th=[10421], 40.00th=[11863], 50.00th=[12125], 60.00th=[14746], 00:34:08.331 | 70.00th=[20317], 80.00th=[37487], 90.00th=[56886], 95.00th=[59507], 00:34:08.331 | 99.00th=[81265], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:34:08.331 | 99.99th=[85459] 00:34:08.331 bw ( KiB/s): min=13192, max=18472, per=18.42%, avg=15832.00, stdev=3733.52, samples=2 00:34:08.331 iops : min= 3298, max= 4618, avg=3958.00, stdev=933.38, samples=2 00:34:08.331 lat (msec) : 2=0.46%, 4=3.70%, 10=36.39%, 20=40.66%, 50=10.38% 00:34:08.331 lat (msec) : 100=8.41% 00:34:08.331 cpu : usr=2.78%, sys=4.37%, ctx=369, majf=0, minf=1 00:34:08.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:08.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.331 issued rwts: total=3584,4086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:08.331 job2: (groupid=0, jobs=1): err= 0: pid=319602: Mon Dec 9 12:09:15 2024 00:34:08.331 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:34:08.331 slat (nsec): min=933, max=7060.8k, avg=72390.92, stdev=470132.58 00:34:08.331 clat (usec): min=4162, max=23183, avg=8846.04, stdev=2612.80 00:34:08.331 lat (usec): min=4290, max=23192, avg=8918.43, stdev=2658.38 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6849], 00:34:08.331 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 8291], 60.00th=[ 9110], 00:34:08.331 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[12649], 95.00th=[14222], 00:34:08.331 | 99.00th=[17433], 99.50th=[19792], 99.90th=[20841], 99.95th=[23200], 00:34:08.331 | 99.99th=[23200] 00:34:08.331 write: IOPS=6627, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1003msec); 0 zone resets 00:34:08.331 slat (nsec): min=1531, max=5560.3k, avg=79485.17, stdev=390643.07 00:34:08.331 clat (usec): min=2846, max=43507, avg=10928.26, stdev=6566.47 00:34:08.331 lat (usec): min=3145, max=43514, avg=11007.75, stdev=6612.04 00:34:08.331 clat percentiles (usec): 00:34:08.331 | 1.00th=[ 4293], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6456], 00:34:08.331 | 30.00th=[ 6652], 40.00th=[ 7242], 50.00th=[ 8225], 60.00th=[ 9503], 00:34:08.331 | 70.00th=[11600], 80.00th=[14353], 90.00th=[20317], 95.00th=[25035], 00:34:08.331 | 99.00th=[37487], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:34:08.331 | 99.99th=[43254] 00:34:08.331 bw ( KiB/s): min=20480, max=31680, per=30.34%, avg=26080.00, stdev=7919.60, samples=2 00:34:08.331 iops : min= 5120, max= 7920, avg=6520.00, stdev=1979.90, samples=2 00:34:08.331 lat (msec) : 4=0.22%, 10=67.99%, 20=25.89%, 50=5.90% 00:34:08.331 cpu : usr=3.49%, sys=5.79%, ctx=770, majf=0, minf=1 00:34:08.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:08.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.332 issued rwts: total=6144,6647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.332 latency : target=0, window=0, percentile=100.00%, depth=128 
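For reference when reading the per-job blocks in these reports: fio's normal output gives, for each job, a read/write headline (IOPS and bandwidth), slat/clat/lat sections with percentiles, per-sample bw and iops lines, latency buckets by usec/msec, CPU usage, and the depth/submit/complete histograms. A minimal sketch for pulling the headline numbers back out of a saved log; it assumes the fio normal output above (fio-3.35 format) was captured to a file, and the name fio.log is illustrative:

# Minimal sketch, assuming the fio normal output above was saved to fio.log.
# Print each job's write IOPS token:
grep -oE 'write: IOPS=[0-9.]+[kM]?' fio.log
# Print the avg= token from every completion-latency (clat) line:
awk '/clat \(usec\):/ { for (i = 1; i <= NF; i++) if ($i ~ /^avg=/) print $i }' fio.log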
00:34:08.332 job3: (groupid=0, jobs=1): err= 0: pid=319603: Mon Dec 9 12:09:15 2024 00:34:08.332 read: IOPS=5736, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec) 00:34:08.332 slat (nsec): min=1013, max=13706k, avg=89102.00, stdev=637466.44 00:34:08.332 clat (usec): min=2945, max=54756, avg=10851.48, stdev=6577.50 00:34:08.332 lat (usec): min=3613, max=54767, avg=10940.59, stdev=6635.38 00:34:08.332 clat percentiles (usec): 00:34:08.332 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 7046], 00:34:08.332 | 30.00th=[ 7308], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[10028], 00:34:08.332 | 70.00th=[11338], 80.00th=[12256], 90.00th=[15795], 95.00th=[19792], 00:34:08.332 | 99.00th=[45876], 99.50th=[49021], 99.90th=[51119], 99.95th=[54789], 00:34:08.332 | 99.99th=[54789] 00:34:08.332 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:34:08.332 slat (nsec): min=1626, max=10164k, avg=73611.09, stdev=491382.75 00:34:08.332 clat (usec): min=1200, max=54724, avg=10558.13, stdev=7268.62 00:34:08.332 lat (usec): min=1213, max=54726, avg=10631.74, stdev=7305.47 00:34:08.332 clat percentiles (usec): 00:34:08.332 | 1.00th=[ 3392], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 6390], 00:34:08.332 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 8979], 00:34:08.332 | 70.00th=[11600], 80.00th=[12518], 90.00th=[20579], 95.00th=[27657], 00:34:08.332 | 99.00th=[36963], 99.50th=[41681], 99.90th=[43779], 99.95th=[51119], 00:34:08.332 | 99.99th=[54789] 00:34:08.332 bw ( KiB/s): min=23816, max=25336, per=28.59%, avg=24576.00, stdev=1074.80, samples=2 00:34:08.332 iops : min= 5954, max= 6334, avg=6144.00, stdev=268.70, samples=2 00:34:08.332 lat (msec) : 2=0.08%, 4=0.84%, 10=61.09%, 20=30.11%, 50=7.64% 00:34:08.332 lat (msec) : 100=0.25% 00:34:08.332 cpu : usr=4.69%, sys=6.78%, ctx=372, majf=0, minf=2 00:34:08.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:08.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.332 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:08.332 00:34:08.332 Run status group 0 (all jobs): 00:34:08.332 READ: bw=77.9MiB/s (81.7MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.1MB/s), io=78.5MiB (82.3MB), run=1003-1008msec 00:34:08.332 WRITE: bw=84.0MiB/s (88.0MB/s), 15.8MiB/s-25.9MiB/s (16.6MB/s-27.1MB/s), io=84.6MiB (88.7MB), run=1003-1008msec 00:34:08.332 00:34:08.332 Disk stats (read/write): 00:34:08.332 nvme0n1: ios=4133/4439, merge=0/0, ticks=26043/28375, in_queue=54418, util=88.38% 00:34:08.332 nvme0n2: ios=3617/3607, merge=0/0, ticks=37967/64267, in_queue=102234, util=96.23% 00:34:08.332 nvme0n3: ios=4664/5071, merge=0/0, ticks=21225/29046, in_queue=50271, util=92.62% 00:34:08.332 nvme0n4: ios=4665/5007, merge=0/0, ticks=48780/51414, in_queue=100194, util=96.16% 00:34:08.332 12:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:08.332 12:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=319898 00:34:08.332 12:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:08.332 12:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
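What follows is the hotplug phase of target/fio.sh: line 58 starts a 10-second read job in the background, line 59 records its pid (319898 in this run), and after the 3-second sleep the RAID and malloc bdevs are deleted out from under the live namespaces, so every fio job is expected to die with err=95 (Operation not supported), which is why the harness later prints 'nvmf hotplug test: fio failed as expected'. A condensed sketch of that flow follows; it is not the verbatim script, and capturing the background pid with $! and the ||-style status check are assumptions:

# Condensed sketch of the hotplug flow traced below (paths as in this run);
# the $! capture and the || status check are assumptions, not the literal
# contents of target/fio.sh.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper \
    -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_raid_delete concat0        # pull the bdevs out from under live I/O
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'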
00:34:08.332 [global] 00:34:08.332 thread=1 00:34:08.332 invalidate=1 00:34:08.332 rw=read 00:34:08.332 time_based=1 00:34:08.332 runtime=10 00:34:08.332 ioengine=libaio 00:34:08.332 direct=1 00:34:08.332 bs=4096 00:34:08.332 iodepth=1 00:34:08.332 norandommap=1 00:34:08.332 numjobs=1 00:34:08.332 00:34:08.332 [job0] 00:34:08.332 filename=/dev/nvme0n1 00:34:08.332 [job1] 00:34:08.332 filename=/dev/nvme0n2 00:34:08.332 [job2] 00:34:08.332 filename=/dev/nvme0n3 00:34:08.332 [job3] 00:34:08.332 filename=/dev/nvme0n4 00:34:08.332 Could not set queue depth (nvme0n1) 00:34:08.332 Could not set queue depth (nvme0n2) 00:34:08.332 Could not set queue depth (nvme0n3) 00:34:08.332 Could not set queue depth (nvme0n4) 00:34:08.592 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.592 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.592 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.592 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.592 fio-3.35 00:34:08.592 Starting 4 threads 00:34:11.138 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:11.400 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=745472, buflen=4096 00:34:11.400 fio: pid=320129, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.400 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:11.661 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9580544, buflen=4096 00:34:11.661 fio: pid=320128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.661 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.661 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:11.923 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.923 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:11.923 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1536000, buflen=4096 00:34:11.923 fio: pid=320126, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:11.923 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:11.923 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:11.923 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=679936, buflen=4096 00:34:11.923 fio: pid=320127, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:34:11.923 00:34:11.923 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=320126: Mon Dec 9 12:09:19 2024 00:34:11.923 read: IOPS=125, BW=500KiB/s (511kB/s)(1500KiB/3003msec) 00:34:11.923 slat (usec): min=6, max=232, avg=24.22, stdev=12.89 00:34:11.923 clat (usec): min=419, max=42748, avg=7919.43, stdev=15484.64 00:34:11.923 lat (usec): min=427, max=42981, avg=7943.65, stdev=15487.00 00:34:11.923 clat percentiles (usec): 00:34:11.923 | 1.00th=[ 453], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 734], 00:34:11.923 | 30.00th=[ 766], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:34:11.923 | 70.00th=[ 857], 80.00th=[ 1074], 90.00th=[41681], 95.00th=[42206], 00:34:11.923 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:11.923 | 99.99th=[42730] 00:34:11.923 bw ( KiB/s): min= 96, max= 1592, per=14.51%, avg=561.60, stdev=643.62, samples=5 00:34:11.923 iops : min= 24, max= 398, avg=140.40, stdev=160.91, samples=5 00:34:11.923 lat (usec) : 500=1.06%, 750=23.14%, 1000=54.26% 00:34:11.923 lat (msec) : 2=3.72%, 50=17.55% 00:34:11.923 cpu : usr=0.13%, sys=0.30%, ctx=378, majf=0, minf=1 00:34:11.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.923 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.923 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.923 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=320127: Mon Dec 9 12:09:19 2024 00:34:11.924 read: IOPS=52, BW=210KiB/s (215kB/s)(664KiB/3168msec) 00:34:11.924 slat (usec): min=6, max=9580, avg=122.83, stdev=877.20 00:34:11.924 clat (usec): min=530, max=41954, avg=18821.65, stdev=19969.54 00:34:11.924 lat (usec): min=556, max=41980, avg=18945.07, stdev=19911.67 00:34:11.924 clat percentiles (usec): 00:34:11.924 | 1.00th=[ 562], 5.00th=[ 693], 10.00th=[ 775], 20.00th=[ 840], 00:34:11.924 | 30.00th=[ 979], 40.00th=[ 1090], 50.00th=[ 1188], 60.00th=[40633], 00:34:11.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:11.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:11.924 | 99.99th=[42206] 00:34:11.924 bw ( KiB/s): min= 96, max= 691, per=5.10%, avg=197.83, stdev=241.63, samples=6 00:34:11.924 iops : min= 24, max= 172, avg=49.33, stdev=60.10, samples=6 00:34:11.924 lat (usec) : 750=8.98%, 1000=21.56% 00:34:11.924 lat (msec) : 2=23.95%, 4=0.60%, 50=44.31% 00:34:11.924 cpu : usr=0.06%, sys=0.16%, ctx=171, majf=0, minf=2 00:34:11.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.924 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=320128: Mon Dec 9 12:09:19 2024 00:34:11.924 read: IOPS=845, BW=3383KiB/s (3464kB/s)(9356KiB/2766msec) 00:34:11.924 slat (usec): min=7, max=14633, avg=34.28, stdev=336.26 00:34:11.924 clat (usec): min=174, max=41575, avg=1137.02, stdev=3983.34 00:34:11.924 lat 
(usec): min=181, max=41583, avg=1171.31, stdev=3996.99 00:34:11.924 clat percentiles (usec): 00:34:11.924 | 1.00th=[ 343], 5.00th=[ 603], 10.00th=[ 644], 20.00th=[ 701], 00:34:11.924 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 766], 00:34:11.924 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:34:11.924 | 99.00th=[ 996], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:11.924 | 99.99th=[41681] 00:34:11.924 bw ( KiB/s): min= 312, max= 5256, per=84.53%, avg=3268.80, stdev=2212.90, samples=5 00:34:11.924 iops : min= 78, max= 1314, avg=817.20, stdev=553.22, samples=5 00:34:11.924 lat (usec) : 250=0.38%, 500=1.84%, 750=41.07%, 1000=55.68% 00:34:11.924 lat (msec) : 50=0.98% 00:34:11.924 cpu : usr=0.90%, sys=2.39%, ctx=2345, majf=0, minf=2 00:34:11.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.924 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=320129: Mon Dec 9 12:09:19 2024 00:34:11.924 read: IOPS=70, BW=281KiB/s (287kB/s)(728KiB/2594msec) 00:34:11.924 slat (nsec): min=7230, max=74688, avg=25935.01, stdev=7065.07 00:34:11.924 clat (usec): min=443, max=41844, avg=14099.27, stdev=19005.56 00:34:11.924 lat (usec): min=469, max=41873, avg=14125.20, stdev=19006.17 00:34:11.924 clat percentiles (usec): 00:34:11.924 | 1.00th=[ 523], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 734], 00:34:11.924 | 30.00th=[ 807], 40.00th=[ 848], 50.00th=[ 881], 60.00th=[ 947], 00:34:11.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:11.924 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:11.924 | 99.99th=[41681] 00:34:11.924 bw ( KiB/s): min= 96, max= 384, per=7.45%, avg=288.00, stdev=115.24, samples=5 00:34:11.924 iops : min= 24, max= 96, avg=72.00, stdev=28.81, samples=5 00:34:11.924 lat (usec) : 500=0.55%, 750=21.31%, 1000=41.53% 00:34:11.924 lat (msec) : 2=3.28%, 50=32.79% 00:34:11.924 cpu : usr=0.00%, sys=0.27%, ctx=183, majf=0, minf=2 00:34:11.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.924 issued rwts: total=183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.924 00:34:11.924 Run status group 0 (all jobs): 00:34:11.924 READ: bw=3866KiB/s (3959kB/s), 210KiB/s-3383KiB/s (215kB/s-3464kB/s), io=12.0MiB (12.5MB), run=2594-3168msec 00:34:11.924 00:34:11.924 Disk stats (read/write): 00:34:11.924 nvme0n1: ios=371/0, merge=0/0, ticks=2798/0, in_queue=2798, util=94.77% 00:34:11.924 nvme0n2: ios=164/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.24% 00:34:11.924 nvme0n3: ios=2196/0, merge=0/0, ticks=3331/0, in_queue=3331, util=99.56% 00:34:11.924 nvme0n4: ios=183/0, merge=0/0, ticks=2569/0, in_queue=2569, util=96.02% 00:34:12.185 12:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.185 12:09:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:12.446 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.446 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:12.446 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.446 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:12.707 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:12.707 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 319898 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:12.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:12.969 nvmf hotplug test: fio failed as expected 00:34:12.969 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:13.231 
12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@122 -- # sync 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # set +e 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # for i in {1..20} 00:34:13.231 12:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:34:13.231 rmmod nvme_tcp 00:34:13.231 rmmod nvme_fabrics 00:34:13.231 rmmod nvme_keyring 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # set -e 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@130 -- # return 0 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 316063 ']' 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 316063 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 316063 ']' 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 316063 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316063 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316063' 00:34:13.231 killing process with pid 316063 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 316063 00:34:13.231 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 316063 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:13.493 12:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # iptr 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # remove_spdk_ns 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.493 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.405 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:34:15.405 00:34:15.405 real 0m27.804s 00:34:15.405 user 2m12.661s 00:34:15.405 sys 0m11.919s 00:34:15.405 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.405 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.405 ************************************ 00:34:15.405 END TEST nvmf_fio_target 00:34:15.405 ************************************ 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:15.666 ************************************ 00:34:15.666 START TEST nvmf_bdevio 00:34:15.666 ************************************ 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:15.666 * Looking for test storage... 
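Before the bdevio tests run, the trace below shows autotest_common.sh probing the installed lcov and gating the coverage flags on its version: it extracts the version field with awk, runs the lt/cmp_versions helpers against 2, and only then exports the LCOV_OPTS/LCOV strings with the --rc lcov_branch_coverage=1 style options. A simplified equivalent of that gate, with sort -V standing in for the script's own field-by-field cmp_versions:

# Simplified sketch of the lcov version gate traced below; the real helper
# compares version fields itself rather than shelling out to sort, and the
# exported strings here are abbreviated.
ver=$(lcov --version | awk '{print $NF}')            # e.g. "1.15"
if printf '%s\n%s\n' "$ver" 2 | sort -V -C; then     # holds when ver <= 2
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    export LCOV="lcov $LCOV_OPTS"
fi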
00:34:15.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:15.666 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.928 --rc genhtml_branch_coverage=1 00:34:15.928 --rc genhtml_function_coverage=1 00:34:15.928 --rc genhtml_legend=1 00:34:15.928 --rc geninfo_all_blocks=1 00:34:15.928 --rc geninfo_unexecuted_blocks=1 00:34:15.928 00:34:15.928 ' 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.928 --rc genhtml_branch_coverage=1 00:34:15.928 --rc genhtml_function_coverage=1 00:34:15.928 --rc genhtml_legend=1 00:34:15.928 --rc geninfo_all_blocks=1 00:34:15.928 --rc geninfo_unexecuted_blocks=1 00:34:15.928 00:34:15.928 ' 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.928 --rc genhtml_branch_coverage=1 00:34:15.928 --rc genhtml_function_coverage=1 00:34:15.928 --rc genhtml_legend=1 00:34:15.928 --rc geninfo_all_blocks=1 00:34:15.928 --rc geninfo_unexecuted_blocks=1 00:34:15.928 00:34:15.928 ' 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:15.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.928 --rc genhtml_branch_coverage=1 00:34:15.928 --rc genhtml_function_coverage=1 00:34:15.928 --rc genhtml_legend=1 00:34:15.928 --rc geninfo_all_blocks=1 00:34:15.928 --rc geninfo_unexecuted_blocks=1 00:34:15.928 00:34:15.928 ' 00:34:15.928 12:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.928 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # : 0 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.929 12:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@56 -- # have_pci_nics=0 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@310 -- # xtrace_disable 00:34:15.929 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_devs=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_devs 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_net_devs=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@318 -- # pci_drivers=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@318 -- # local -A pci_drivers 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # net_devs=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga 
net_devs 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # e810=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga e810 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # x722=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga x722 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@323 -- # mlx=() 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@323 -- # local -ga mlx 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:22.519 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:22.519 12:09:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:34:22.519 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:22.520 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:22.520 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:22.520 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:34:22.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:34:22.520 00:34:22.520 --- 10.0.0.2 ping statistics --- 00:34:22.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.520 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:34:22.520 00:34:22.520 --- 10.0.0.1 ping statistics --- 00:34:22.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.520 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:22.520 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.782 12:09:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=324992 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 324992 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 324992 ']' 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.782 12:09:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.782 [2024-12-09 12:09:30.499113] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.782 [2024-12-09 12:09:30.500236] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:34:22.782 [2024-12-09 12:09:30.500289] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.782 [2024-12-09 12:09:30.599850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.782 [2024-12-09 12:09:30.653452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.782 [2024-12-09 12:09:30.653506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.782 [2024-12-09 12:09:30.653515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.782 [2024-12-09 12:09:30.653523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.782 [2024-12-09 12:09:30.653529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.782 [2024-12-09 12:09:30.655815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:22.782 [2024-12-09 12:09:30.655977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:22.782 [2024-12-09 12:09:30.656134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:22.782 [2024-12-09 12:09:30.656135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.046 [2024-12-09 12:09:30.734772] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
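Condensed from the trace above: nvmf_tcp_init moves the target-side e810 port (cvl_0_0) into a private network namespace, addresses both ends of the link, opens the NVMe/TCP port, verifies reachability with ping, and nvmfappstart then launches the target inside that namespace in interrupt mode. A hedged replay of those steps, with every path and name copied from the log (this is a summary of the traced commands, not the script itself):

    # Move the target-side port into its own namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open port 4420, tagged SPDK_NVMF so teardown can strip exactly these rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch the target in the namespace: interrupt mode, core mask 0x78 (cores 3-6,
    # matching the four reactors reported in the log).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78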
00:34:23.046 [2024-12-09 12:09:30.735665] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:23.046 [2024-12-09 12:09:30.736007] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:23.046 [2024-12-09 12:09:30.736509] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:23.046 [2024-12-09 12:09:30.736527] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.617 [2024-12-09 12:09:31.369149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.617 Malloc0 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:23.617 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.618 12:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:23.618 [2024-12-09 12:09:31.453498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:23.618 { 00:34:23.618 "params": { 00:34:23.618 "name": "Nvme$subsystem", 00:34:23.618 "trtype": "$TEST_TRANSPORT", 00:34:23.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.618 "adrfam": "ipv4", 00:34:23.618 "trsvcid": "$NVMF_PORT", 00:34:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.618 "hdgst": ${hdgst:-false}, 00:34:23.618 "ddgst": ${ddgst:-false} 00:34:23.618 }, 00:34:23.618 "method": "bdev_nvme_attach_controller" 00:34:23.618 } 00:34:23.618 EOF 00:34:23.618 )") 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:34:23.618 12:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:23.618 "params": { 00:34:23.618 "name": "Nvme1", 00:34:23.618 "trtype": "tcp", 00:34:23.618 "traddr": "10.0.0.2", 00:34:23.618 "adrfam": "ipv4", 00:34:23.618 "trsvcid": "4420", 00:34:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.618 "hdgst": false, 00:34:23.618 "ddgst": false 00:34:23.618 }, 00:34:23.618 "method": "bdev_nvme_attach_controller" 00:34:23.618 }' 00:34:23.878 [2024-12-09 12:09:31.511680] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
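Stripped of the xtrace plumbing, the target setup above is five RPCs, and the configuration that gen_nvmf_target_json pipes to bdevio on /dev/fd/62 resolves (per the printf in the trace) to a single bdev_nvme_attach_controller entry. A sketch using only values visible in the log; invoking the RPCs via rpc.py is illustrative, since the test actually goes through its rpc_cmd wrapper:

    # Target-side configuration, in traced order:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side entry consumed by bdevio --json /dev/fd/62 (resolved heredoc from the trace):
    # {
    #   "params": {
    #     "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    #     "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    #     "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
    #   },
    #   "method": "bdev_nvme_attach_controller"
    # }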
00:34:23.878 [2024-12-09 12:09:31.511752] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325175 ] 00:34:23.878 [2024-12-09 12:09:31.606302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:23.878 [2024-12-09 12:09:31.662403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.878 [2024-12-09 12:09:31.662534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.878 [2024-12-09 12:09:31.662537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.139 I/O targets: 00:34:24.139 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:24.139 00:34:24.139 00:34:24.139 CUnit - A unit testing framework for C - Version 2.1-3 00:34:24.139 http://cunit.sourceforge.net/ 00:34:24.139 00:34:24.139 00:34:24.139 Suite: bdevio tests on: Nvme1n1 00:34:24.139 Test: blockdev write read block ...passed 00:34:24.139 Test: blockdev write zeroes read block ...passed 00:34:24.139 Test: blockdev write zeroes read no split ...passed 00:34:24.139 Test: blockdev write zeroes read split ...passed 00:34:24.139 Test: blockdev write zeroes read split partial ...passed 00:34:24.139 Test: blockdev reset ...[2024-12-09 12:09:32.012898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:24.139 [2024-12-09 12:09:32.012962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303580 (9): Bad file descriptor 00:34:24.400 [2024-12-09 12:09:32.148205] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:24.400 passed 00:34:24.400 Test: blockdev write read 8 blocks ...passed 00:34:24.400 Test: blockdev write read size > 128k ...passed 00:34:24.400 Test: blockdev write read invalid size ...passed 00:34:24.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:24.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:24.400 Test: blockdev write read max offset ...passed 00:34:24.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:24.660 Test: blockdev writev readv 8 blocks ...passed 00:34:24.660 Test: blockdev writev readv 30 x 1block ...passed 00:34:24.660 Test: blockdev writev readv block ...passed 00:34:24.661 Test: blockdev writev readv size > 128k ...passed 00:34:24.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:24.661 Test: blockdev comparev and writev ...[2024-12-09 12:09:32.414636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.414665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.414676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.414683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.415207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.415217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.415227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.415233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.415728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.415737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.415747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.415752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.416254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.416263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.416273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:24.661 [2024-12-09 12:09:32.416278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:24.661 passed 00:34:24.661 Test: blockdev nvme passthru rw ...passed 00:34:24.661 Test: blockdev nvme passthru vendor specific ...[2024-12-09 12:09:32.500473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.661 [2024-12-09 12:09:32.500485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.500852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.661 [2024-12-09 12:09:32.500863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.501215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.661 [2024-12-09 12:09:32.501225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:24.661 [2024-12-09 12:09:32.501576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:24.661 [2024-12-09 12:09:32.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:24.661 passed 00:34:24.661 Test: blockdev nvme admin passthru ...passed 00:34:24.922 Test: blockdev copy ...passed 00:34:24.922 00:34:24.922 Run Summary: Type Total Ran Passed Failed Inactive 00:34:24.922 suites 1 1 n/a 0 0 00:34:24.922 tests 23 23 23 0 0 00:34:24.922 asserts 152 152 152 0 n/a 00:34:24.922 00:34:24.922 Elapsed time = 1.485 seconds 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@122 -- # sync 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # set +e 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # for i in {1..20} 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:34:24.922 rmmod nvme_tcp 00:34:24.922 rmmod nvme_fabrics 00:34:24.922 rmmod nvme_keyring 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 
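The remaining lines replay nvmftestfini, mirroring the fio-target cleanup at the top of this section. In outline (commands copied from the trace where shown; the namespace removal happens inside _remove_spdk_ns, whose body is not traced here, so that line is an assumption):

    sync
    set +e
    for i in {1..20}; do                     # retry until the host modules unload
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill "$nvmfpid"                          # 324992 in this run
    wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk          # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1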
00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # set -e 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@130 -- # return 0 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 324992 ']' 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 324992 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 324992 ']' 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 324992 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.922 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324992 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324992' 00:34:25.183 killing process with pid 324992 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 324992 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 324992 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # iptr 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # remove_spdk_ns 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.183 12:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:34:27.730 00:34:27.730 real 0m11.696s 00:34:27.730 user 0m9.651s 
00:34:27.730 sys 0m6.217s 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:27.730 ************************************ 00:34:27.730 END TEST nvmf_bdevio 00:34:27.730 ************************************ 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:27.730 00:34:27.730 real 4m58.426s 00:34:27.730 user 10m17.061s 00:34:27.730 sys 2m4.083s 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.730 12:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:27.730 ************************************ 00:34:27.730 END TEST nvmf_target_core_interrupt_mode 00:34:27.730 ************************************ 00:34:27.730 12:09:35 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:27.730 12:09:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:27.730 12:09:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.730 12:09:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.730 ************************************ 00:34:27.730 START TEST nvmf_interrupt 00:34:27.730 ************************************ 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:27.730 * Looking for test storage... 
00:34:27.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.730 --rc genhtml_branch_coverage=1 00:34:27.730 --rc genhtml_function_coverage=1 00:34:27.730 --rc genhtml_legend=1 00:34:27.730 --rc geninfo_all_blocks=1 00:34:27.730 --rc geninfo_unexecuted_blocks=1 00:34:27.730 00:34:27.730 ' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.730 --rc genhtml_branch_coverage=1 00:34:27.730 --rc genhtml_function_coverage=1 00:34:27.730 --rc genhtml_legend=1 00:34:27.730 --rc geninfo_all_blocks=1 00:34:27.730 --rc geninfo_unexecuted_blocks=1 00:34:27.730 00:34:27.730 ' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.730 --rc genhtml_branch_coverage=1 00:34:27.730 --rc genhtml_function_coverage=1 00:34:27.730 --rc genhtml_legend=1 00:34:27.730 --rc geninfo_all_blocks=1 00:34:27.730 --rc geninfo_unexecuted_blocks=1 00:34:27.730 00:34:27.730 ' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:27.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.730 --rc genhtml_branch_coverage=1 00:34:27.730 --rc genhtml_function_coverage=1 00:34:27.730 --rc genhtml_legend=1 00:34:27.730 --rc geninfo_all_blocks=1 00:34:27.730 --rc geninfo_unexecuted_blocks=1 00:34:27.730 00:34:27.730 ' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.730 12:09:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # : 0 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # '[' 1 -eq 1 ']' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@56 -- # have_pci_nics=0 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@310 -- # 
xtrace_disable 00:34:27.731 12:09:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_devs=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_devs 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_net_devs=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@318 -- # pci_drivers=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@318 -- # local -A pci_drivers 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # net_devs=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga net_devs 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # e810=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga e810 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # x722=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga x722 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@323 -- # mlx=() 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@323 -- # local -ga mlx 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:35.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:35.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:35.880 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:35.880 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:35.880 12:09:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:34:35.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:35.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:34:35.880 00:34:35.880 --- 10.0.0.2 ping statistics --- 00:34:35.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.880 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:34:35.880 00:34:35.880 --- 10.0.0.1 ping statistics --- 00:34:35.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.880 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.880 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=329522 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 329522 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 329522 ']' 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.881 12:09:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.881 [2024-12-09 12:09:42.896453] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.881 [2024-12-09 12:09:42.897618] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
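At this point the harness has the network rig in place (cvl_0_0 moved into namespace cvl_0_0_ns_spdk as the target port at 10.0.0.2/24, cvl_0_1 left in the root namespace as the initiator port at 10.0.0.1/24, and an iptables ACCEPT rule for TCP port 4420, all verified by the pings above), and nvmfappstart has just launched the target inside that namespace. Condensed to its essentials, the launch being traced amounts to the following (a sketch, not the verbatim helper; waitforlisten polls /var/tmp/spdk.sock, as the "Waiting for process..." message shows):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!                  # 329522 in this run
    waitforlisten "$nvmfpid"    # block until the RPC server answers on /var/tmp/spdk.sock

With --interrupt-mode, the two reactors (cores 0 and 1 from -m 0x3) sleep on file descriptors instead of busy-polling, which is exactly what the reactor idle checks later in this suite assert.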
00:34:35.881 [2024-12-09 12:09:42.897686] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.881 [2024-12-09 12:09:42.996209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.881 [2024-12-09 12:09:43.048383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.881 [2024-12-09 12:09:43.048433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.881 [2024-12-09 12:09:43.048442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.881 [2024-12-09 12:09:43.048449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.881 [2024-12-09 12:09:43.048455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.881 [2024-12-09 12:09:43.050167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.881 [2024-12-09 12:09:43.050173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.881 [2024-12-09 12:09:43.131300] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.881 [2024-12-09 12:09:43.131896] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:35.881 [2024-12-09 12:09:43.132199] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:35.881 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:36.142 5000+0 records in 00:34:36.142 5000+0 records out 00:34:36.142 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0193014 s, 531 MB/s 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.142 AIO0 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.142 [2024-12-09 12:09:43.807152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.142 [2024-12-09 12:09:43.839572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 329522 0 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 0 idle 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:36.142 12:09:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:36.142 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329522 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0' 
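The idle check being traced here distills to: take one batch sample of per-thread stats from top, pull out the reactor thread's row, and compare its %CPU column (field 9) against the threshold. A simplified rendering of the interrupt/common.sh logic visible in the trace (the retry loop and the busy branch are omitted):

    # Succeeds when reactor_<idx> of <pid> is idle (%CPU at or below idle_threshold=30).
    reactor_is_idle() {
        local pid=$1 idx=$2 top_reactor cpu_rate
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}      # 0.0 -> 0, 99.9 -> 99
        (( cpu_rate <= 30 ))
    }

In this run reactor_0 shows 0.0 %CPU while the target sits in interrupt mode with no I/O, so the check returns 0.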
00:34:36.142 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329522 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0 00:34:36.142 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.142 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:36.403 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 329522 1 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 1 idle 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329535 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329535 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # 
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=329895 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 329522 0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 329522 0 busy 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:36.404 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329522 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.53 reactor_0' 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329522 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.53 reactor_0 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 329522 1 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 329522 1 busy 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:36.665 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329535 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1' 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329535 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:36.926 12:09:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 329895 00:34:47.142 Initializing NVMe Controllers 00:34:47.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:47.142 Controller IO queue size 256, less than required. 00:34:47.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:47.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:47.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:47.142 Initialization complete. Launching workers. 
00:34:47.142 ======================================================== 00:34:47.142 Latency(us) 00:34:47.142 Device Information : IOPS MiB/s Average min max 00:34:47.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18524.92 72.36 13824.48 3044.48 32990.18 00:34:47.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19379.71 75.70 13212.48 7308.86 51606.48 00:34:47.142 ======================================================== 00:34:47.142 Total : 37904.63 148.06 13511.58 3044.48 51606.48 00:34:47.142 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 329522 0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 0 idle 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0' 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329522 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:47.142 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 329522 1 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 1 idle 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329535 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329535 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:47.143 12:09:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:47.714 12:09:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:47.714 12:09:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:47.714 12:09:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:47.714 12:09:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:47.714 12:09:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 329522 0 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 0 idle 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.626 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329522 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.70 reactor_0' 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329522 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.70 reactor_0 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 329522 1 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 329522 1 idle 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=329522 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:49.886 12:09:57 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 329522 -w 256 00:34:49.886 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 329535 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 329535 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:50.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:50.148 12:09:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@122 -- # sync 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:34:50.148 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # set +e 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # for i in {1..20} 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:34:50.410 rmmod nvme_tcp 00:34:50.410 rmmod nvme_fabrics 00:34:50.410 rmmod nvme_keyring 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # set -e 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@130 -- # return 0 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 329522 ']' 00:34:50.410 
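The shutdown path traced just above and continuing below is the standard nvmftestfini sequence: flush I/O, unload the initiator-side kernel modules (the rmmod lines are the verbose output of modprobe -r), then kill the target and undo the network plumbing. In outline (a condensed sketch; the real helpers add retries and set +e guards around the module removal):

    sync
    modprobe -v -r nvme-tcp       # also drags out nvme_fabrics/nvme_keyring as dependents
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 329522
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's ACCEPT rule
    _remove_spdk_ns                                       # tear down cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1                              # clear the initiator address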
12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 329522 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 329522 ']' 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 329522 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329522 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329522' 00:34:50.410 killing process with pid 329522 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 329522 00:34:50.410 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 329522 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # iptr 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # remove_spdk_ns 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:50.671 12:09:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.702 12:10:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:34:52.702 00:34:52.702 real 0m25.198s 00:34:52.702 user 0m39.883s 00:34:52.702 sys 0m10.088s 00:34:52.702 12:10:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.702 12:10:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:52.702 ************************************ 00:34:52.702 END TEST nvmf_interrupt 00:34:52.702 ************************************ 00:34:52.702 00:34:52.702 real 29m49.133s 00:34:52.702 user 61m15.588s 00:34:52.702 sys 10m9.531s 00:34:52.702 12:10:00 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.702 12:10:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.702 ************************************ 00:34:52.702 END TEST nvmf_tcp 00:34:52.702 ************************************ 00:34:52.702 12:10:00 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:52.702 12:10:00 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:52.702 12:10:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
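run_test is the wrapper that produces the START TEST/END TEST banners and the real/user/sys timing blocks seen throughout this log; each suite above (nvmf_bdevio, nvmf_interrupt, nvmf_tcp, and now spdkcli_nvmf_tcp) goes through it. Roughly (a sketch; the actual helper in autotest_common.sh also validates its arguments, which is the '[' 3 -le 1 ']' test traced here, and manages the xtrace state being toggled):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # e.g. test/spdkcli/nvmf.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }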
00:34:52.702 12:10:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.702 12:10:00 -- common/autotest_common.sh@10 -- # set +x 00:34:52.702 ************************************ 00:34:52.702 START TEST spdkcli_nvmf_tcp 00:34:52.702 ************************************ 00:34:52.702 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:53.016 * Looking for test storage... 00:34:53.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.016 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.017 --rc genhtml_branch_coverage=1 00:34:53.017 --rc genhtml_function_coverage=1 00:34:53.017 --rc genhtml_legend=1 00:34:53.017 --rc geninfo_all_blocks=1 00:34:53.017 --rc geninfo_unexecuted_blocks=1 00:34:53.017 00:34:53.017 ' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.017 --rc genhtml_branch_coverage=1 00:34:53.017 --rc genhtml_function_coverage=1 00:34:53.017 --rc genhtml_legend=1 00:34:53.017 --rc geninfo_all_blocks=1 00:34:53.017 --rc geninfo_unexecuted_blocks=1 00:34:53.017 00:34:53.017 ' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.017 --rc genhtml_branch_coverage=1 00:34:53.017 --rc genhtml_function_coverage=1 00:34:53.017 --rc genhtml_legend=1 00:34:53.017 --rc geninfo_all_blocks=1 00:34:53.017 --rc geninfo_unexecuted_blocks=1 00:34:53.017 00:34:53.017 ' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.017 --rc genhtml_branch_coverage=1 00:34:53.017 --rc genhtml_function_coverage=1 00:34:53.017 --rc genhtml_legend=1 00:34:53.017 --rc geninfo_all_blocks=1 00:34:53.017 --rc geninfo_unexecuted_blocks=1 00:34:53.017 00:34:53.017 ' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:53.017 
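[annotation] The lcov gate above runs "lt 1.15 2", which scripts/common.sh answers by splitting both versions on . - : and comparing field by field until one side wins. A simplified sketch of that comparator; the real helper also backs gt/ge/le wrappers and digit validation ("decimal") that this version omits:

    cmp_versions() {
      local -a ver1 ver2
      local op=$2 v len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < len; v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
          [[ $op == *">"* ]]; return    # decided by first differing field
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
          [[ $op == *"<"* ]]; return
        fi
      done
      [[ $op == *"="* ]]                # all fields equal
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    # lt 1.15 2: first fields give 1 < 2, so lcov 1.15 is treated as pre-2.0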
12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.017 12:10:00 
spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # : 0 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:34:53.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- nvmf/common.sh@56 -- # have_pci_nics=0 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=333080 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 333080 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 333080 ']' 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.017 12:10:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.017 [2024-12-09 12:10:00.800494] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
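[annotation] With nvmf_tgt launched (-m 0x3, two reactors), waitforlisten blocks the test until the app's JSON-RPC socket at /var/tmp/spdk.sock is usable, failing fast if pid 333080 dies during startup. A plausible sketch of its shape; the -S socket probe and the 0.5 s delay are assumptions, the real autotest_common.sh helper polls the RPC server itself:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      [[ -n $pid ]] || return 1
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up: ready
        sleep 0.5
      done
      return 1                                   # retries exhausted
    }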
00:34:53.017 [2024-12-09 12:10:00.800547] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333080 ] 00:34:53.278 [2024-12-09 12:10:00.889889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:53.278 [2024-12-09 12:10:00.938164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.278 [2024-12-09 12:10:00.938168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.849 12:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:53.849 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:53.849 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:53.849 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:53.849 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:53.849 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:53.849 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:53.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:53.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:53.849 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:53.849 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:53.849 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:53.849 ' 00:34:56.394 [2024-12-09 12:10:04.126423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.775 [2024-12-09 12:10:05.334335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:59.685 [2024-12-09 12:10:07.552508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:01.595 [2024-12-09 12:10:09.461891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:03.506 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:03.506 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:03.506 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:03.506 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:03.506 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:03.506 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:03.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:03.506 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:03.506 12:10:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:03.766 12:10:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:03.766 12:10:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:03.766 12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:03.766 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.766 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.027 
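[annotation] check_match is the verification step traced just above: dump the live /nvmf tree through spdkcli and compare it against a checked-in pattern file with SPDK's match tool. Reconstructed from the three commands in the log, with $rootdir and $testdir standing in for the workspace paths; the redirect into the .test file is inferred, since match compares a generated file against its .test.match pattern:

    check_match() {
      "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
      "$rootdir/test/app/match/match" "$testdir/match_files/spdkcli_nvmf.test.match"
      rm -f "$testdir/match_files/spdkcli_nvmf.test"
    }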
12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:04.027 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.027 12:10:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.027 12:10:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:04.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:04.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:04.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:04.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:04.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:04.027 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:04.027 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:04.027 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:04.027 ' 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:09.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:09.310 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:09.311 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:09.311 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.311 
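[annotation] Both the create batch earlier and the delete batch above go through spdkcli_job.py, one quoted entry per line: the spdkcli command, a string identifying the affected object, and an optional flag. Creates carry True and deletes omit it (echoed back as False), consistent with the flag saying whether the object should be verified as present afterwards -- an inference from the log, not a documented contract. A minimal hypothetical batch in that format:

    # Hypothetical spdkcli_job.py invocation mirroring the traced format:
    # 'command' 'object string' [flag]; flag semantics inferred from the log.
    "$rootdir/test/spdkcli/spdkcli_job.py" "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
    '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
    '/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1' 'nqn.2014-08.org.spdk:cnode1'
    '/bdevs/malloc delete Malloc1' 'Malloc1'"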
12:10:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 333080 ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333080' 00:35:09.311 killing process with pid 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 333080 ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 333080 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 333080 ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 333080 00:35:09.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (333080) - No such process 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 333080 is not found' 00:35:09.311 Process with pid 333080 is not found 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:09.311 00:35:09.311 real 0m16.436s 00:35:09.311 user 0m34.233s 00:35:09.311 sys 0m0.765s 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:09.311 12:10:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.311 ************************************ 00:35:09.311 END TEST spdkcli_nvmf_tcp 00:35:09.311 ************************************ 00:35:09.311 12:10:16 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:09.311 12:10:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:09.311 12:10:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:09.311 12:10:16 -- common/autotest_common.sh@10 -- # set +x 00:35:09.311 ************************************ 00:35:09.311 START TEST nvmf_identify_passthru 00:35:09.311 ************************************ 00:35:09.311 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:09.311 * Looking for test storage... 
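[annotation] run_test, which frames every suite in this log with START/END banners and a real/user/sys summary, is at heart a timed wrapper around the test script. A condensed sketch; the real helper also ties into the harness's timing and xtrace bookkeeping, which this version leaves out:

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                 # emits the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
    }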
00:35:09.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.311 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:09.311 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:35:09.311 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:09.311 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.311 12:10:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.572 12:10:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:09.572 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.572 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:09.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.572 --rc genhtml_branch_coverage=1 00:35:09.572 --rc genhtml_function_coverage=1 00:35:09.572 --rc genhtml_legend=1 00:35:09.572 --rc geninfo_all_blocks=1 00:35:09.572 --rc geninfo_unexecuted_blocks=1 00:35:09.572 00:35:09.572 ' 00:35:09.572 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:09.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.572 --rc genhtml_branch_coverage=1 00:35:09.572 --rc genhtml_function_coverage=1 00:35:09.572 --rc genhtml_legend=1 00:35:09.572 --rc geninfo_all_blocks=1 00:35:09.572 --rc geninfo_unexecuted_blocks=1 00:35:09.572 00:35:09.572 ' 00:35:09.572 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:09.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.572 --rc genhtml_branch_coverage=1 00:35:09.572 --rc genhtml_function_coverage=1 00:35:09.572 --rc genhtml_legend=1 00:35:09.572 --rc geninfo_all_blocks=1 00:35:09.572 --rc geninfo_unexecuted_blocks=1 00:35:09.572 00:35:09.572 ' 00:35:09.572 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:09.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.572 --rc genhtml_branch_coverage=1 00:35:09.572 --rc genhtml_function_coverage=1 00:35:09.572 --rc genhtml_legend=1 00:35:09.572 --rc geninfo_all_blocks=1 00:35:09.572 --rc geninfo_unexecuted_blocks=1 00:35:09.572 00:35:09.572 ' 00:35:09.573 12:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@52 -- # : 0 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:35:09.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@56 -- # have_pci_nics=0 00:35:09.573 12:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:09.573 12:10:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.573 12:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.573 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:09.573 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:09.573 12:10:17 nvmf_identify_passthru -- nvmf/common.sh@310 -- # xtrace_disable 00:35:09.573 12:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_devs=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_devs 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_net_devs=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # pci_drivers=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # local -A pci_drivers 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # net_devs=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga net_devs 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # e810=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga e810 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # x722=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga x722 00:35:17.713 12:10:24 
nvmf_identify_passthru -- nvmf/common.sh@323 -- # mlx=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@323 -- # local -ga mlx 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:17.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:17.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@379 -- # [[ 
tcp == rdma ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:17.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:17.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:35:17.713 12:10:24 nvmf_identify_passthru -- 
nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:35:17.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:35:17.713 00:35:17.713 --- 10.0.0.2 ping statistics --- 00:35:17.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.713 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
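[annotation] nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test network: cvl_0_0 moves into a fresh namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420 toward the initiator NIC, and a ping in each direction proves connectivity. The same steps, collected from the trace (the SPDK_NVMF comment on the iptables rule is dropped for brevity):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target-side NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> initiator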
00:35:17.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:35:17.713 00:35:17.713 --- 10.0.0.1 ping statistics --- 00:35:17.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.713 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:17.713 12:10:24 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:17.713 12:10:24 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:17.713 12:10:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:17.713 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:17.713 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:17.713 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:17.713 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=340159 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:17.974 12:10:25 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 340159 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 340159 ']' 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.974 12:10:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:17.974 [2024-12-09 12:10:25.763042] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:35:17.974 [2024-12-09 12:10:25.763113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.974 [2024-12-09 12:10:25.843754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:18.234 [2024-12-09 12:10:25.897739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.234 [2024-12-09 12:10:25.897794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
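In shell terms, the identify step traced above collapses to a short pipeline: query the local PCIe controller and keep the third whitespace-separated field of the matching line. A minimal sketch using the values shown in the trace (binary path shortened; the log uses the full build-tree path):

    # Baseline identity of the physical controller; these values are compared
    # later against what the NVMe-oF passthru controller reports over TCP.
    bdf=0000:65:00.0
    nvme_serial_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')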
00:35:18.234 [2024-12-09 12:10:25.897803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.234 [2024-12-09 12:10:25.897811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.234 [2024-12-09 12:10:25.897817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.234 [2024-12-09 12:10:25.899878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.234 [2024-12-09 12:10:25.900007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.234 [2024-12-09 12:10:25.900174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:18.234 [2024-12-09 12:10:25.900176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:18.805 12:10:26 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.805 INFO: Log level set to 20 00:35:18.805 INFO: Requests: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "method": "nvmf_set_config", 00:35:18.805 "id": 1, 00:35:18.805 "params": { 00:35:18.805 "admin_cmd_passthru": { 00:35:18.805 "identify_ctrlr": true 00:35:18.805 } 00:35:18.805 } 00:35:18.805 } 00:35:18.805 00:35:18.805 INFO: response: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "id": 1, 00:35:18.805 "result": true 00:35:18.805 } 00:35:18.805 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.805 12:10:26 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.805 INFO: Setting log level to 20 00:35:18.805 INFO: Setting log level to 20 00:35:18.805 INFO: Log level set to 20 00:35:18.805 INFO: Log level set to 20 00:35:18.805 INFO: Requests: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "method": "framework_start_init", 00:35:18.805 "id": 1 00:35:18.805 } 00:35:18.805 00:35:18.805 INFO: Requests: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "method": "framework_start_init", 00:35:18.805 "id": 1 00:35:18.805 } 00:35:18.805 00:35:18.805 [2024-12-09 12:10:26.649467] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:18.805 INFO: response: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "id": 1, 00:35:18.805 "result": true 00:35:18.805 } 00:35:18.805 00:35:18.805 INFO: response: 00:35:18.805 { 00:35:18.805 "jsonrpc": "2.0", 00:35:18.805 "id": 1, 00:35:18.805 "result": true 00:35:18.805 } 00:35:18.805 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.805 12:10:26 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.805 12:10:26 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:18.805 INFO: Setting log level to 40 00:35:18.805 INFO: Setting log level to 40 00:35:18.805 INFO: Setting log level to 40 00:35:18.805 [2024-12-09 12:10:26.662796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.805 12:10:26 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.805 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.066 12:10:26 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:19.066 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.066 12:10:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 Nvme0n1 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 [2024-12-09 12:10:27.057746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.327 [ 00:35:19.327 { 00:35:19.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:19.327 "subtype": "Discovery", 00:35:19.327 "listen_addresses": [], 00:35:19.327 "allow_any_host": true, 00:35:19.327 "hosts": [] 00:35:19.327 }, 00:35:19.327 { 00:35:19.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.327 "subtype": "NVMe", 00:35:19.327 "listen_addresses": [ 00:35:19.327 { 00:35:19.327 "trtype": "TCP", 00:35:19.327 "adrfam": "IPv4", 00:35:19.327 "traddr": "10.0.0.2", 00:35:19.327 "trsvcid": "4420" 00:35:19.327 } 00:35:19.327 ], 00:35:19.327 "allow_any_host": true, 00:35:19.327 "hosts": [], 00:35:19.327 "serial_number": 
"SPDK00000000000001", 00:35:19.327 "model_number": "SPDK bdev Controller", 00:35:19.327 "max_namespaces": 1, 00:35:19.327 "min_cntlid": 1, 00:35:19.327 "max_cntlid": 65519, 00:35:19.327 "namespaces": [ 00:35:19.327 { 00:35:19.327 "nsid": 1, 00:35:19.327 "bdev_name": "Nvme0n1", 00:35:19.327 "name": "Nvme0n1", 00:35:19.327 "nguid": "36344730526054870025384500000044", 00:35:19.327 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:19.327 } 00:35:19.327 ] 00:35:19.327 } 00:35:19.327 ] 00:35:19.327 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:19.327 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:19.587 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:19.587 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:19.587 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:19.587 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:19.848 12:10:27 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@122 -- # sync 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@125 -- # set +e 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@126 -- # for i in {1..20} 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:35:19.848 rmmod nvme_tcp 00:35:19.848 rmmod nvme_fabrics 00:35:19.848 rmmod nvme_keyring 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@129 -- # set -e 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@130 -- # return 0 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 
340159 ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 340159 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 340159 ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 340159 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340159 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340159' 00:35:19.848 killing process with pid 340159 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 340159 00:35:19.848 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 340159 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # iptr 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@303 -- # remove_spdk_ns 00:35:20.108 12:10:27 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.108 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:20.108 12:10:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.650 12:10:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:35:22.650 00:35:22.650 real 0m13.003s 00:35:22.650 user 0m10.335s 00:35:22.650 sys 0m6.601s 00:35:22.650 12:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:22.650 12:10:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:22.650 ************************************ 00:35:22.650 END TEST nvmf_identify_passthru 00:35:22.650 ************************************ 00:35:22.650 12:10:30 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.650 12:10:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:22.650 12:10:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.650 12:10:30 -- common/autotest_common.sh@10 -- # set +x 00:35:22.650 ************************************ 00:35:22.650 START TEST nvmf_dif 00:35:22.650 ************************************ 00:35:22.650 12:10:30 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:22.650 * Looking for test storage... 
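Condensed, the teardown that closes nvmf_identify_passthru above is: delete the subsystem, kill the target by pid, unload the kernel initiator modules, drop the SPDK-tagged firewall rules, and flush the test addresses. A sketch of the same steps (the trace's killprocess helper does extra bookkeeping that is omitted here):

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # killprocess 340159
    modprobe -v -r nvme-tcp                               # also removes nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip rules tagged by ipts
    ip -4 addr flush cvl_0_1                              # initiator-side address cleanup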
00:35:22.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:22.650 12:10:30 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:22.650 12:10:30 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:22.650 12:10:30 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:22.650 12:10:30 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:22.650 12:10:30 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.651 --rc genhtml_branch_coverage=1 00:35:22.651 --rc genhtml_function_coverage=1 00:35:22.651 --rc genhtml_legend=1 00:35:22.651 --rc geninfo_all_blocks=1 00:35:22.651 --rc geninfo_unexecuted_blocks=1 00:35:22.651 00:35:22.651 ' 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.651 --rc genhtml_branch_coverage=1 00:35:22.651 --rc genhtml_function_coverage=1 00:35:22.651 --rc genhtml_legend=1 00:35:22.651 --rc geninfo_all_blocks=1 00:35:22.651 --rc geninfo_unexecuted_blocks=1 00:35:22.651 00:35:22.651 ' 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:35:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.651 --rc genhtml_branch_coverage=1 00:35:22.651 --rc genhtml_function_coverage=1 00:35:22.651 --rc genhtml_legend=1 00:35:22.651 --rc geninfo_all_blocks=1 00:35:22.651 --rc geninfo_unexecuted_blocks=1 00:35:22.651 00:35:22.651 ' 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:22.651 --rc genhtml_branch_coverage=1 00:35:22.651 --rc genhtml_function_coverage=1 00:35:22.651 --rc genhtml_legend=1 00:35:22.651 --rc geninfo_all_blocks=1 00:35:22.651 --rc geninfo_unexecuted_blocks=1 00:35:22.651 00:35:22.651 ' 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.651 12:10:30 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.651 12:10:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.651 12:10:30 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.651 12:10:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.651 12:10:30 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:22.651 12:10:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@52 -- # : 0 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:35:22.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@56 -- # have_pci_nics=0 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:22.651 12:10:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:22.651 12:10:30 nvmf_dif -- nvmf/common.sh@310 -- # 
xtrace_disable 00:35:22.651 12:10:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@316 -- # pci_devs=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_devs 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@317 -- # pci_net_devs=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@318 -- # pci_drivers=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@318 -- # local -A pci_drivers 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@320 -- # net_devs=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@320 -- # local -ga net_devs 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@321 -- # e810=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@321 -- # local -ga e810 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@322 -- # x722=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@322 -- # local -ga x722 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@323 -- # mlx=() 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@323 -- # local -ga mlx 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:29.236 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:35:29.236 
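The device scan that follows classifies NICs from a pre-built PCI bus cache: 0x8086:0x159b is an Intel E810 variant, so both ports land in the e810 array and their net interfaces (cvl_0_0, cvl_0_1) are collected from sysfs. A hypothetical standalone equivalent, using lspci in place of the harness cache:

    # Sketch only: the harness reads $pci_bus_cache instead of calling lspci.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done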
12:10:37 nvmf_dif -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:29.236 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:29.236 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:29.236 12:10:37 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:29.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@260 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.237 12:10:37 nvmf_dif -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:35:29.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:35:29.499 00:35:29.499 --- 10.0.0.2 ping statistics --- 00:35:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.499 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
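The nvmf_tcp_init sequence traced here gives the target port its own network namespace while the initiator port stays in the root namespace, then proves reachability in both directions with single-packet pings (output continues below). Extracted from the trace, the core commands are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator NIC, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator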
00:35:29.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:35:29.499 00:35:29.499 --- 10.0.0.1 ping statistics --- 00:35:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.499 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:35:29.499 12:10:37 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:32.805 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:32.805 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:32.805 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:33.378 12:10:40 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:33.378 12:10:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:33.378 12:10:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:33.379 12:10:41 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.379 12:10:41 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=346233 00:35:33.379 12:10:41 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 346233 00:35:33.379 12:10:41 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 346233 ']' 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:33.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.379 12:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.379 [2024-12-09 12:10:41.092212] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:35:33.379 [2024-12-09 12:10:41.092278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.379 [2024-12-09 12:10:41.189416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.379 [2024-12-09 12:10:41.241050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.379 [2024-12-09 12:10:41.241105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.379 [2024-12-09 12:10:41.241114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.379 [2024-12-09 12:10:41.241121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.379 [2024-12-09 12:10:41.241128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:33.379 [2024-12-09 12:10:41.241980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:34.322 12:10:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 12:10:41 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.322 12:10:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:34.322 12:10:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 [2024-12-09 12:10:41.930189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.322 12:10:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 ************************************ 00:35:34.322 START TEST fio_dif_1_default 00:35:34.322 ************************************ 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 bdev_null0 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.322 12:10:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.322 [2024-12-09 12:10:42.018537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:34.322 { 00:35:34.322 "params": { 00:35:34.322 "name": "Nvme$subsystem", 00:35:34.322 "trtype": "$TEST_TRANSPORT", 00:35:34.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.322 "adrfam": "ipv4", 00:35:34.322 "trsvcid": "$NVMF_PORT", 00:35:34.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.322 "hdgst": ${hdgst:-false}, 00:35:34.322 
"ddgst": ${ddgst:-false} 00:35:34.322 }, 00:35:34.322 "method": "bdev_nvme_attach_controller" 00:35:34.322 } 00:35:34.322 EOF 00:35:34.322 )") 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:34.322 "params": { 00:35:34.322 "name": "Nvme0", 00:35:34.322 "trtype": "tcp", 00:35:34.322 "traddr": "10.0.0.2", 00:35:34.322 "adrfam": "ipv4", 00:35:34.322 "trsvcid": "4420", 00:35:34.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.322 "hdgst": false, 00:35:34.322 "ddgst": false 00:35:34.322 }, 00:35:34.322 "method": "bdev_nvme_attach_controller" 00:35:34.322 }' 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:34.322 12:10:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.891 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:34.891 fio-3.35 00:35:34.891 Starting 1 thread 00:35:47.124 00:35:47.124 filename0: (groupid=0, jobs=1): err= 0: pid=346791: Mon Dec 9 12:10:53 2024 00:35:47.124 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10030msec) 00:35:47.124 slat (nsec): min=5481, max=56887, avg=6479.20, stdev=2389.10 00:35:47.124 clat (usec): min=40903, max=44031, avg=41254.16, stdev=566.99 00:35:47.124 lat (usec): min=40908, max=44074, avg=41260.64, stdev=568.12 00:35:47.124 clat percentiles (usec): 00:35:47.124 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:47.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:47.124 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:35:47.124 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:35:47.124 | 99.99th=[43779] 00:35:47.124 bw ( KiB/s): min= 352, max= 416, per=99.84%, avg=387.20, stdev=14.31, samples=20 00:35:47.124 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:35:47.124 lat (msec) : 50=100.00% 00:35:47.124 cpu : usr=93.42%, sys=6.34%, ctx=14, majf=0, minf=247 00:35:47.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.124 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:47.124 00:35:47.124 Run status group 0 (all jobs): 
00:35:47.124 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10030-10030msec 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.124 00:35:47.124 real 0m11.291s 00:35:47.124 user 0m27.967s 00:35:47.124 sys 0m1.044s 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 ************************************ 00:35:47.124 END TEST fio_dif_1_default 00:35:47.124 ************************************ 00:35:47.124 12:10:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:47.124 12:10:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:47.124 12:10:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 ************************************ 00:35:47.124 START TEST fio_dif_1_multi_subsystems 00:35:47.124 ************************************ 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 bdev_null0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.124 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.124 [2024-12-09 12:10:53.388174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.125 bdev_null1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:47.125 { 00:35:47.125 "params": { 00:35:47.125 "name": "Nvme$subsystem", 00:35:47.125 "trtype": "$TEST_TRANSPORT", 00:35:47.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.125 "adrfam": "ipv4", 00:35:47.125 "trsvcid": "$NVMF_PORT", 00:35:47.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.125 "hdgst": ${hdgst:-false}, 00:35:47.125 "ddgst": ${ddgst:-false} 00:35:47.125 }, 00:35:47.125 "method": "bdev_nvme_attach_controller" 00:35:47.125 } 00:35:47.125 EOF 00:35:47.125 )") 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:47.125 { 00:35:47.125 "params": { 00:35:47.125 "name": "Nvme$subsystem", 00:35:47.125 "trtype": "$TEST_TRANSPORT", 00:35:47.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.125 "adrfam": "ipv4", 00:35:47.125 "trsvcid": "$NVMF_PORT", 00:35:47.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.125 "hdgst": ${hdgst:-false}, 00:35:47.125 "ddgst": ${ddgst:-false} 00:35:47.125 }, 00:35:47.125 "method": "bdev_nvme_attach_controller" 00:35:47.125 } 00:35:47.125 EOF 00:35:47.125 )") 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:47.125 "params": { 00:35:47.125 "name": "Nvme0", 00:35:47.125 "trtype": "tcp", 00:35:47.125 "traddr": "10.0.0.2", 00:35:47.125 "adrfam": "ipv4", 00:35:47.125 "trsvcid": "4420", 00:35:47.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.125 "hdgst": false, 00:35:47.125 "ddgst": false 00:35:47.125 }, 00:35:47.125 "method": "bdev_nvme_attach_controller" 00:35:47.125 },{ 00:35:47.125 "params": { 00:35:47.125 "name": "Nvme1", 00:35:47.125 "trtype": "tcp", 00:35:47.125 "traddr": "10.0.0.2", 00:35:47.125 "adrfam": "ipv4", 00:35:47.125 "trsvcid": "4420", 00:35:47.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.125 "hdgst": false, 00:35:47.125 "ddgst": false 00:35:47.125 }, 00:35:47.125 "method": "bdev_nvme_attach_controller" 00:35:47.125 }' 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:47.125 12:10:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.125 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:47.125 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:47.125 fio-3.35 00:35:47.125 Starting 2 threads 00:35:57.124 00:35:57.124 filename0: (groupid=0, jobs=1): err= 0: pid=349063: Mon Dec 9 12:11:04 2024 00:35:57.124 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec) 00:35:57.124 slat (nsec): min=5493, max=23581, avg=6215.44, stdev=1189.49 00:35:57.124 clat (usec): min=678, max=43259, avg=21089.39, stdev=20141.29 00:35:57.124 lat (usec): min=683, max=43283, avg=21095.61, stdev=20141.28 00:35:57.124 clat percentiles (usec): 00:35:57.124 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 857], 00:35:57.124 | 30.00th=[ 881], 40.00th=[ 906], 50.00th=[40633], 60.00th=[41157], 00:35:57.124 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:57.124 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:57.124 | 99.99th=[43254] 00:35:57.124 bw ( KiB/s): min= 672, max= 768, per=49.87%, avg=756.80, stdev=28.00, samples=20 00:35:57.124 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:35:57.124 lat (usec) : 750=2.32%, 1000=46.20% 00:35:57.124 lat (msec) : 2=1.27%, 50=50.21% 00:35:57.124 cpu : usr=95.31%, sys=4.47%, ctx=14, majf=0, minf=93 00:35:57.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.124 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:57.124 filename1: (groupid=0, jobs=1): err= 0: pid=349064: Mon Dec 9 12:11:04 2024 00:35:57.124 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:35:57.124 slat (nsec): min=5501, max=27236, avg=6392.59, stdev=1346.85 00:35:57.124 clat (usec): min=506, max=42212, avg=21084.49, stdev=20175.09 00:35:57.124 lat (usec): min=511, max=42221, avg=21090.88, stdev=20175.07 00:35:57.124 clat percentiles (usec): 00:35:57.124 | 1.00th=[ 537], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 832], 00:35:57.124 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:35:57.124 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:57.124 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:57.124 | 99.99th=[42206] 00:35:57.124 bw ( KiB/s): min= 672, max= 768, per=50.06%, avg=759.58, stdev=25.78, samples=19 00:35:57.124 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:57.124 lat (usec) : 750=4.80%, 1000=44.99% 00:35:57.124 lat (msec) : 50=50.21% 00:35:57.124 cpu : usr=94.91%, sys=4.87%, ctx=14, majf=0, minf=163 00:35:57.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.124 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.124 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:57.124 00:35:57.124 Run status group 0 (all jobs): 00:35:57.124 READ: bw=1516KiB/s (1552kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=14.8MiB (15.5MB), run=10003-10005msec 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:57.124 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 00:35:57.125 real 0m11.450s 00:35:57.125 user 0m36.840s 00:35:57.125 sys 0m1.282s 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 ************************************ 00:35:57.125 END TEST fio_dif_1_multi_subsystems 00:35:57.125 ************************************ 00:35:57.125 12:11:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:57.125 
12:11:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:57.125 12:11:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 ************************************ 00:35:57.125 START TEST fio_dif_rand_params 00:35:57.125 ************************************ 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 bdev_null0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.125 [2024-12-09 12:11:04.922144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:35:57.125 { 00:35:57.125 "params": { 00:35:57.125 "name": "Nvme$subsystem", 00:35:57.125 "trtype": "$TEST_TRANSPORT", 00:35:57.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.125 "adrfam": "ipv4", 00:35:57.125 "trsvcid": "$NVMF_PORT", 00:35:57.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.125 "hdgst": ${hdgst:-false}, 00:35:57.125 "ddgst": ${ddgst:-false} 00:35:57.125 }, 00:35:57.125 "method": "bdev_nvme_attach_controller" 00:35:57.125 } 00:35:57.125 EOF 00:35:57.125 )") 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
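For readers following the xtrace: gen_nvmf_target_json expands the heredoc template above once per subsystem ID, validates the assembled entries with jq ., then joins them with IFS=, and prints the finished configuration (the next records in the trace). A minimal hand-written equivalent of the single-subsystem config fio receives on /dev/fd/62 might look like the sketch below; the outer "subsystems"/"bdev"/"config" envelope follows SPDK's standard JSON-config layout and, like the /tmp path, is an assumption rather than something shown verbatim in this log:

    # Sketch only: envelope and file path are assumed; params copied from the trace.
    cat > /tmp/spdk_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

"hdgst" and "ddgst" stay false because this test leaves NVMe/TCP header and data digests disabled.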
00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:57.125 "params": { 00:35:57.125 "name": "Nvme0", 00:35:57.125 "trtype": "tcp", 00:35:57.125 "traddr": "10.0.0.2", 00:35:57.125 "adrfam": "ipv4", 00:35:57.125 "trsvcid": "4420", 00:35:57.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.125 "hdgst": false, 00:35:57.125 "ddgst": false 00:35:57.125 }, 00:35:57.125 "method": "bdev_nvme_attach_controller" 00:35:57.125 }' 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:57.125 12:11:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:57.409 12:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:57.409 12:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:57.409 12:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.409 12:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.671 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:57.671 ... 
00:35:57.671 fio-3.35 00:35:57.671 Starting 3 threads 00:36:04.255 00:36:04.255 filename0: (groupid=0, jobs=1): err= 0: pid=351266: Mon Dec 9 12:11:10 2024 00:36:04.255 read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(191MiB/5004msec) 00:36:04.255 slat (nsec): min=5509, max=32293, avg=6550.58, stdev=1623.36 00:36:04.255 clat (usec): min=3409, max=88716, avg=9794.57, stdev=11528.21 00:36:04.255 lat (usec): min=3415, max=88722, avg=9801.12, stdev=11528.28 00:36:04.255 clat percentiles (usec): 00:36:04.255 | 1.00th=[ 3916], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 5342], 00:36:04.255 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6783], 60.00th=[ 7439], 00:36:04.255 | 70.00th=[ 8029], 80.00th=[ 8717], 90.00th=[10028], 95.00th=[45876], 00:36:04.255 | 99.00th=[47973], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:36:04.255 | 99.99th=[88605] 00:36:04.255 bw ( KiB/s): min=24320, max=48384, per=41.02%, avg=38855.11, stdev=7953.30, samples=9 00:36:04.255 iops : min= 190, max= 378, avg=303.56, stdev=62.14, samples=9 00:36:04.255 lat (msec) : 4=1.31%, 10=88.50%, 20=3.14%, 50=6.40%, 100=0.65% 00:36:04.255 cpu : usr=92.76%, sys=6.08%, ctx=414, majf=0, minf=124 00:36:04.255 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 issued rwts: total=1531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.255 filename0: (groupid=0, jobs=1): err= 0: pid=351267: Mon Dec 9 12:11:10 2024 00:36:04.255 read: IOPS=157, BW=19.7MiB/s (20.6MB/s)(99.0MiB/5030msec) 00:36:04.255 slat (nsec): min=5546, max=31270, avg=8530.60, stdev=1798.03 00:36:04.255 clat (usec): min=4171, max=90910, avg=19040.46, stdev=19815.57 00:36:04.255 lat (usec): min=4179, max=90933, avg=19048.99, stdev=19815.79 00:36:04.255 clat percentiles (usec): 00:36:04.255 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 6521], 20.00th=[ 7373], 00:36:04.255 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:36:04.255 | 70.00th=[10945], 80.00th=[46924], 90.00th=[49021], 95.00th=[50594], 00:36:04.255 | 99.00th=[88605], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:36:04.255 | 99.99th=[90702] 00:36:04.255 bw ( KiB/s): min=11520, max=30208, per=21.33%, avg=20198.40, stdev=7117.49, samples=10 00:36:04.255 iops : min= 90, max= 236, avg=157.80, stdev=55.61, samples=10 00:36:04.255 lat (msec) : 10=62.37%, 20=14.02%, 50=17.68%, 100=5.93% 00:36:04.255 cpu : usr=95.78%, sys=3.98%, ctx=9, majf=0, minf=82 00:36:04.255 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.255 filename0: (groupid=0, jobs=1): err= 0: pid=351268: Mon Dec 9 12:11:10 2024 00:36:04.255 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(175MiB/5012msec) 00:36:04.255 slat (nsec): min=5543, max=32356, avg=7972.97, stdev=1452.65 00:36:04.255 clat (usec): min=3479, max=90165, avg=10735.08, stdev=13881.26 00:36:04.255 lat (usec): min=3485, max=90174, avg=10743.05, stdev=13881.34 00:36:04.255 clat percentiles (usec): 00:36:04.255 | 1.00th=[ 3785], 5.00th=[ 4146], 10.00th=[ 4490], 
20.00th=[ 5342], 00:36:04.255 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6718], 60.00th=[ 7242], 00:36:04.255 | 70.00th=[ 7832], 80.00th=[ 8586], 90.00th=[10159], 95.00th=[46924], 00:36:04.255 | 99.00th=[86508], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:36:04.255 | 99.99th=[89654] 00:36:04.255 bw ( KiB/s): min=17408, max=49920, per=37.73%, avg=35737.60, stdev=12241.21, samples=10 00:36:04.255 iops : min= 136, max= 390, avg=279.20, stdev=95.63, samples=10 00:36:04.255 lat (msec) : 4=3.43%, 10=86.06%, 20=1.36%, 50=7.79%, 100=1.36% 00:36:04.255 cpu : usr=95.61%, sys=4.15%, ctx=9, majf=0, minf=48 00:36:04.255 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.255 issued rwts: total=1399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.255 00:36:04.255 Run status group 0 (all jobs): 00:36:04.255 READ: bw=92.5MiB/s (97.0MB/s), 19.7MiB/s-38.2MiB/s (20.6MB/s-40.1MB/s), io=465MiB (488MB), run=5004-5030msec 00:36:04.255 12:11:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 bdev_null0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 [2024-12-09 12:11:11.066141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.255 bdev_null1 00:36:04.255 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 bdev_null2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
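The target-side plumbing for this stage mirrors the earlier one- and two-subsystem cases, now fanned out to IDs 0..2 with DIF type 2 null bdevs (per the NULL_DIF=2 setting above). Each rpc_cmd in the trace resolves to a plain rpc.py invocation; an equivalent standalone setup, with the rpc.py path assumed, would be:

    RPC=/path/to/spdk/scripts/rpc.py   # path assumed
    for i in 0 1 2; do
        # 64 MiB null bdev: 512-byte blocks plus 16 bytes of metadata for DIF
        $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

All three subsystems share the 10.0.0.2:4420 TCP portal; initiators tell them apart by subsystem NQN alone, which is what the three-entry JSON config assembled next in the trace relies on.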
00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:04.256 { 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme$subsystem", 00:36:04.256 "trtype": "$TEST_TRANSPORT", 00:36:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "$NVMF_PORT", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.256 "hdgst": ${hdgst:-false}, 00:36:04.256 "ddgst": ${ddgst:-false} 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 } 00:36:04.256 EOF 00:36:04.256 )") 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:04.256 { 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme$subsystem", 00:36:04.256 "trtype": "$TEST_TRANSPORT", 00:36:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "$NVMF_PORT", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.256 "hdgst": ${hdgst:-false}, 00:36:04.256 "ddgst": ${ddgst:-false} 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 } 00:36:04.256 EOF 00:36:04.256 )") 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:04.256 12:11:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:04.256 { 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme$subsystem", 00:36:04.256 "trtype": "$TEST_TRANSPORT", 00:36:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "$NVMF_PORT", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.256 "hdgst": ${hdgst:-false}, 00:36:04.256 "ddgst": ${ddgst:-false} 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 } 00:36:04.256 EOF 00:36:04.256 )") 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme0", 00:36:04.256 "trtype": "tcp", 00:36:04.256 "traddr": "10.0.0.2", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "4420", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.256 "hdgst": false, 00:36:04.256 "ddgst": false 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 },{ 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme1", 00:36:04.256 "trtype": "tcp", 00:36:04.256 "traddr": "10.0.0.2", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "4420", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.256 "hdgst": false, 00:36:04.256 "ddgst": false 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 },{ 00:36:04.256 "params": { 00:36:04.256 "name": "Nvme2", 00:36:04.256 "trtype": "tcp", 00:36:04.256 "traddr": "10.0.0.2", 00:36:04.256 "adrfam": "ipv4", 00:36:04.256 "trsvcid": "4420", 00:36:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:04.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:04.256 "hdgst": false, 00:36:04.256 "ddgst": false 00:36:04.256 }, 00:36:04.256 "method": "bdev_nvme_attach_controller" 00:36:04.256 }' 00:36:04.256 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
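The asan_lib= assignments here are the harness probing whether the fio plugin was built against a sanitizer: it runs ldd on build/fio/spdk_bdev, greps first for libasan (GCC) and then for libclang_rt.asan (Clang), and pulls the runtime path out with awk '{print $3}'. Any hit must be prepended to LD_PRELOAD, since ASan generally refuses to start unless its runtime is the first preloaded object; in this trace both greps come back empty, so LD_PRELOAD ends up carrying only the plugin itself. A condensed sketch of the idiom, with the plugin path assumed:

    plugin=/path/to/spdk/build/fio/spdk_bdev   # path assumed
    asan_lib=
    for lib in libasan libclang_rt.asan; do
        # third ldd column is the resolved library path, if the plugin links it
        asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" \
        fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_bdev.json /tmp/dif.fio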
00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.257 12:11:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.257 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.257 ... 00:36:04.257 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.257 ... 00:36:04.257 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.257 ... 00:36:04.257 fio-3.35 00:36:04.257 Starting 24 threads 00:36:16.498 00:36:16.498 filename0: (groupid=0, jobs=1): err= 0: pid=352759: Mon Dec 9 12:11:22 2024 00:36:16.498 read: IOPS=682, BW=2730KiB/s (2795kB/s)(26.7MiB/10012msec) 00:36:16.498 slat (nsec): min=5659, max=70253, avg=9100.58, stdev=5463.32 00:36:16.498 clat (usec): min=1346, max=35836, avg=23370.83, stdev=3509.54 00:36:16.498 lat (usec): min=1362, max=35842, avg=23379.93, stdev=3507.75 00:36:16.498 clat percentiles (usec): 00:36:16.498 | 1.00th=[ 1909], 5.00th=[22152], 10.00th=[23462], 20.00th=[23725], 00:36:16.498 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:16.498 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.498 | 99.00th=[25822], 99.50th=[27919], 99.90th=[34866], 99.95th=[35914], 00:36:16.498 | 99.99th=[35914] 00:36:16.498 bw ( KiB/s): min= 2560, max= 3832, per=4.26%, avg=2734.74, stdev=272.11, samples=19 00:36:16.498 iops : min= 640, max= 958, avg=683.68, stdev=68.03, samples=19 00:36:16.498 lat (msec) : 2=1.08%, 4=0.79%, 10=0.57%, 20=2.17%, 50=95.39% 00:36:16.498 cpu : usr=98.67%, sys=0.95%, ctx=56, majf=0, minf=45 00:36:16.498 IO depths : 1=5.4%, 2=11.6%, 4=24.7%, 8=51.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:16.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 issued rwts: total=6832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.498 filename0: (groupid=0, jobs=1): err= 0: pid=352760: Mon Dec 9 12:11:22 2024 00:36:16.498 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10004msec) 00:36:16.498 slat (usec): min=5, max=113, avg=20.27, stdev=16.43 00:36:16.498 clat (usec): min=8897, max=42500, avg=23783.00, stdev=1813.66 00:36:16.498 lat (usec): min=8905, max=42509, avg=23803.27, stdev=1812.43 00:36:16.498 clat percentiles (usec): 00:36:16.498 | 1.00th=[13566], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:36:16.498 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.498 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.498 | 99.00th=[27395], 99.50th=[29754], 99.90th=[42206], 99.95th=[42730], 00:36:16.498 | 99.99th=[42730] 00:36:16.498 bw ( KiB/s): min= 2560, max= 2816, per=4.16%, avg=2670.32, stdev=66.16, samples=19 00:36:16.498 iops : min= 640, max= 704, avg=667.58, stdev=16.54, samples=19 00:36:16.498 lat (msec) : 10=0.06%, 20=1.81%, 50=98.13% 00:36:16.498 cpu : usr=98.83%, sys=0.86%, ctx=63, majf=0, minf=30 00:36:16.498 IO depths : 1=5.7%, 
2=11.7%, 4=24.3%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:16.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.498 filename0: (groupid=0, jobs=1): err= 0: pid=352761: Mon Dec 9 12:11:22 2024 00:36:16.498 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10006msec) 00:36:16.498 slat (usec): min=5, max=145, avg=31.84, stdev=23.37 00:36:16.498 clat (usec): min=9675, max=49655, avg=23712.19, stdev=2876.39 00:36:16.498 lat (usec): min=9682, max=49672, avg=23744.04, stdev=2877.41 00:36:16.498 clat percentiles (usec): 00:36:16.498 | 1.00th=[14353], 5.00th=[19530], 10.00th=[22938], 20.00th=[23200], 00:36:16.498 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:16.498 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[26346], 00:36:16.498 | 99.00th=[36963], 99.50th=[40109], 99.90th=[49546], 99.95th=[49546], 00:36:16.498 | 99.99th=[49546] 00:36:16.498 bw ( KiB/s): min= 2432, max= 2816, per=4.14%, avg=2655.16, stdev=86.41, samples=19 00:36:16.498 iops : min= 608, max= 704, avg=663.79, stdev=21.60, samples=19 00:36:16.498 lat (msec) : 10=0.06%, 20=5.50%, 50=94.44% 00:36:16.498 cpu : usr=98.09%, sys=1.16%, ctx=201, majf=0, minf=21 00:36:16.498 IO depths : 1=4.3%, 2=8.8%, 4=20.7%, 8=57.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:16.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.498 issued rwts: total=6670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.498 filename0: (groupid=0, jobs=1): err= 0: pid=352762: Mon Dec 9 12:11:22 2024 00:36:16.498 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:16.498 slat (usec): min=5, max=158, avg=24.37, stdev=20.54 00:36:16.498 clat (usec): min=10805, max=32360, avg=23854.62, stdev=1005.64 00:36:16.498 lat (usec): min=10839, max=32369, avg=23879.00, stdev=1003.04 00:36:16.498 clat percentiles (usec): 00:36:16.498 | 1.00th=[21627], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:16.498 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.498 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.498 | 99.00th=[25560], 99.50th=[26608], 99.90th=[29754], 99.95th=[32375], 00:36:16.498 | 99.99th=[32375] 00:36:16.498 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2661.05, stdev=51.72, samples=19 00:36:16.498 iops : min= 640, max= 672, avg=665.26, stdev=12.93, samples=19 00:36:16.498 lat (msec) : 20=0.63%, 50=99.37% 00:36:16.498 cpu : usr=98.75%, sys=0.97%, ctx=12, majf=0, minf=25 00:36:16.498 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:16.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename0: (groupid=0, jobs=1): err= 0: pid=352763: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=668, BW=2674KiB/s (2739kB/s)(26.1MiB/10003msec) 00:36:16.499 slat (usec): min=5, max=136, avg=18.46, stdev=19.77 00:36:16.499 clat 
(usec): min=10153, max=33551, avg=23791.85, stdev=1501.21 00:36:16.499 lat (usec): min=10181, max=33562, avg=23810.31, stdev=1499.22 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[13435], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:16.499 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.499 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.499 | 99.00th=[25822], 99.50th=[26870], 99.90th=[28181], 99.95th=[33424], 00:36:16.499 | 99.99th=[33424] 00:36:16.499 bw ( KiB/s): min= 2560, max= 2821, per=4.17%, avg=2674.79, stdev=70.16, samples=19 00:36:16.499 iops : min= 640, max= 705, avg=668.68, stdev=17.51, samples=19 00:36:16.499 lat (msec) : 20=1.50%, 50=98.50% 00:36:16.499 cpu : usr=98.21%, sys=1.11%, ctx=131, majf=0, minf=23 00:36:16.499 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename0: (groupid=0, jobs=1): err= 0: pid=352764: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=666, BW=2664KiB/s (2728kB/s)(26.0MiB/10003msec) 00:36:16.499 slat (usec): min=5, max=132, avg=26.29, stdev=22.93 00:36:16.499 clat (usec): min=3426, max=45189, avg=23809.06, stdev=3920.15 00:36:16.499 lat (usec): min=3432, max=45196, avg=23835.34, stdev=3921.24 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[13435], 5.00th=[17171], 10.00th=[20841], 20.00th=[23200], 00:36:16.499 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:16.499 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26084], 95.00th=[29754], 00:36:16.499 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43779], 99.95th=[45351], 00:36:16.499 | 99.99th=[45351] 00:36:16.499 bw ( KiB/s): min= 2436, max= 2832, per=4.13%, avg=2653.68, stdev=79.75, samples=19 00:36:16.499 iops : min= 609, max= 708, avg=663.42, stdev=19.94, samples=19 00:36:16.499 lat (msec) : 4=0.09%, 10=0.45%, 20=8.71%, 50=90.75% 00:36:16.499 cpu : usr=98.95%, sys=0.77%, ctx=14, majf=0, minf=29 00:36:16.499 IO depths : 1=2.8%, 2=5.7%, 4=15.3%, 8=65.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=91.6%, 8=3.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename0: (groupid=0, jobs=1): err= 0: pid=352765: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10018msec) 00:36:16.499 slat (usec): min=5, max=120, avg=14.73, stdev=11.55 00:36:16.499 clat (usec): min=3817, max=40452, avg=23732.34, stdev=2100.86 00:36:16.499 lat (usec): min=3849, max=40463, avg=23747.07, stdev=2099.08 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[10683], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:16.499 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.499 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.499 | 99.00th=[25297], 99.50th=[25822], 99.90th=[40633], 99.95th=[40633], 00:36:16.499 | 99.99th=[40633] 00:36:16.499 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2681.60, 
stdev=77.42, samples=20 00:36:16.499 iops : min= 640, max= 736, avg=670.40, stdev=19.35, samples=20 00:36:16.499 lat (msec) : 4=0.13%, 10=0.70%, 20=1.19%, 50=97.98% 00:36:16.499 cpu : usr=98.78%, sys=0.93%, ctx=8, majf=0, minf=32 00:36:16.499 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename0: (groupid=0, jobs=1): err= 0: pid=352766: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10006msec) 00:36:16.499 slat (usec): min=5, max=141, avg=32.58, stdev=23.30 00:36:16.499 clat (usec): min=10547, max=28184, avg=23715.42, stdev=1141.49 00:36:16.499 lat (usec): min=10559, max=28206, avg=23748.00, stdev=1140.43 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[20579], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:16.499 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:16.499 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.499 | 99.00th=[25560], 99.50th=[26608], 99.90th=[27919], 99.95th=[28181], 00:36:16.499 | 99.99th=[28181] 00:36:16.499 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2667.79, stdev=47.95, samples=19 00:36:16.499 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:36:16.499 lat (msec) : 20=0.99%, 50=99.01% 00:36:16.499 cpu : usr=99.00%, sys=0.72%, ctx=25, majf=0, minf=24 00:36:16.499 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename1: (groupid=0, jobs=1): err= 0: pid=352767: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10012msec) 00:36:16.499 slat (usec): min=5, max=133, avg=27.82, stdev=22.71 00:36:16.499 clat (usec): min=10980, max=41599, avg=23474.93, stdev=2153.14 00:36:16.499 lat (usec): min=10986, max=41618, avg=23502.76, stdev=2155.24 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[15008], 5.00th=[19268], 10.00th=[22938], 20.00th=[23200], 00:36:16.499 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:16.499 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.499 | 99.00th=[31327], 99.50th=[32113], 99.90th=[34866], 99.95th=[38011], 00:36:16.499 | 99.99th=[41681] 00:36:16.499 bw ( KiB/s): min= 2560, max= 3008, per=4.19%, avg=2690.53, stdev=104.00, samples=19 00:36:16.499 iops : min= 640, max= 752, avg=672.63, stdev=26.00, samples=19 00:36:16.499 lat (msec) : 20=5.84%, 50=94.16% 00:36:16.499 cpu : usr=98.90%, sys=0.82%, ctx=13, majf=0, minf=47 00:36:16.499 IO depths : 1=5.3%, 2=10.8%, 4=22.5%, 8=54.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:36:16.499 filename1: (groupid=0, jobs=1): err= 0: pid=352768: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10009msec) 00:36:16.499 slat (nsec): min=5665, max=63841, avg=10035.47, stdev=5979.84 00:36:16.499 clat (usec): min=8726, max=35368, avg=23971.23, stdev=1503.74 00:36:16.499 lat (usec): min=8732, max=35376, avg=23981.27, stdev=1503.57 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[18220], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:36:16.499 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.499 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:16.499 | 99.00th=[29230], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:36:16.499 | 99.99th=[35390] 00:36:16.499 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2654.32, stdev=56.16, samples=19 00:36:16.499 iops : min= 640, max= 672, avg=663.58, stdev=14.04, samples=19 00:36:16.499 lat (msec) : 10=0.03%, 20=2.03%, 50=97.94% 00:36:16.499 cpu : usr=98.54%, sys=0.93%, ctx=80, majf=0, minf=28 00:36:16.499 IO depths : 1=4.8%, 2=11.0%, 4=24.6%, 8=52.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.499 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.499 filename1: (groupid=0, jobs=1): err= 0: pid=352769: Mon Dec 9 12:11:22 2024 00:36:16.499 read: IOPS=667, BW=2669KiB/s (2733kB/s)(26.1MiB/10004msec) 00:36:16.499 slat (usec): min=5, max=129, avg=25.11, stdev=15.63 00:36:16.499 clat (usec): min=7198, max=41898, avg=23784.12, stdev=1978.05 00:36:16.499 lat (usec): min=7205, max=41915, avg=23809.23, stdev=1978.66 00:36:16.499 clat percentiles (usec): 00:36:16.499 | 1.00th=[14484], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:36:16.499 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[28705], 99.50th=[33424], 99.90th=[41681], 99.95th=[41681], 00:36:16.500 | 99.99th=[41681] 00:36:16.500 bw ( KiB/s): min= 2432, max= 2720, per=4.15%, avg=2661.89, stdev=68.14, samples=19 00:36:16.500 iops : min= 608, max= 680, avg=665.47, stdev=17.03, samples=19 00:36:16.500 lat (msec) : 10=0.24%, 20=2.16%, 50=97.60% 00:36:16.500 cpu : usr=98.75%, sys=0.95%, ctx=25, majf=0, minf=31 00:36:16.500 IO depths : 1=3.8%, 2=9.3%, 4=22.5%, 8=55.3%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename1: (groupid=0, jobs=1): err= 0: pid=352770: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10007msec) 00:36:16.500 slat (usec): min=5, max=118, avg=21.40, stdev=18.40 00:36:16.500 clat (usec): min=7748, max=40962, avg=23587.78, stdev=2143.31 00:36:16.500 lat (usec): min=7757, max=40970, avg=23609.18, stdev=2143.88 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[14746], 5.00th=[20317], 10.00th=[23200], 20.00th=[23462], 00:36:16.500 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.500 | 
70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[27919], 99.50th=[28705], 99.90th=[41157], 99.95th=[41157], 00:36:16.500 | 99.99th=[41157] 00:36:16.500 bw ( KiB/s): min= 2560, max= 3344, per=4.20%, avg=2694.74, stdev=165.88, samples=19 00:36:16.500 iops : min= 640, max= 836, avg=673.68, stdev=41.47, samples=19 00:36:16.500 lat (msec) : 10=0.18%, 20=4.63%, 50=95.19% 00:36:16.500 cpu : usr=99.00%, sys=0.69%, ctx=61, majf=0, minf=41 00:36:16.500 IO depths : 1=4.8%, 2=10.6%, 4=23.6%, 8=53.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename1: (groupid=0, jobs=1): err= 0: pid=352771: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:36:16.500 slat (usec): min=5, max=119, avg=30.12, stdev=17.64 00:36:16.500 clat (usec): min=5371, max=45559, avg=23776.61, stdev=1610.13 00:36:16.500 lat (usec): min=5377, max=45573, avg=23806.72, stdev=1610.36 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[19792], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:16.500 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[26870], 99.50th=[28181], 99.90th=[42206], 99.95th=[42206], 00:36:16.500 | 99.99th=[45351] 00:36:16.500 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2654.32, stdev=71.93, samples=19 00:36:16.500 iops : min= 608, max= 672, avg=663.58, stdev=17.98, samples=19 00:36:16.500 lat (msec) : 10=0.24%, 20=0.86%, 50=98.90% 00:36:16.500 cpu : usr=98.71%, sys=0.90%, ctx=99, majf=0, minf=30 00:36:16.500 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename1: (groupid=0, jobs=1): err= 0: pid=352772: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.6MiB/10018msec) 00:36:16.500 slat (usec): min=5, max=134, avg=10.67, stdev= 8.26 00:36:16.500 clat (usec): min=6261, max=44324, avg=23495.18, stdev=2518.83 00:36:16.500 lat (usec): min=6267, max=44332, avg=23505.85, stdev=2518.15 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[11338], 5.00th=[20317], 10.00th=[23200], 20.00th=[23725], 00:36:16.500 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[25297], 99.50th=[28181], 99.90th=[42206], 99.95th=[44303], 00:36:16.500 | 99.99th=[44303] 00:36:16.500 bw ( KiB/s): min= 2560, max= 3232, per=4.22%, avg=2712.80, stdev=142.59, samples=20 00:36:16.500 iops : min= 640, max= 808, avg=678.20, stdev=35.65, samples=20 00:36:16.500 lat (msec) : 10=0.76%, 20=3.68%, 50=95.56% 00:36:16.500 cpu : usr=98.94%, sys=0.76%, ctx=14, majf=0, minf=42 00:36:16.500 IO depths : 1=4.9%, 2=10.2%, 4=23.2%, 8=54.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename1: (groupid=0, jobs=1): err= 0: pid=352773: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=664, BW=2658KiB/s (2722kB/s)(26.0MiB/10001msec) 00:36:16.500 slat (usec): min=5, max=109, avg=29.17, stdev=16.15 00:36:16.500 clat (usec): min=11879, max=41968, avg=23807.68, stdev=1425.22 00:36:16.500 lat (usec): min=11885, max=41986, avg=23836.86, stdev=1425.54 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:16.500 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[26870], 99.50th=[28443], 99.90th=[41681], 99.95th=[41681], 00:36:16.500 | 99.99th=[42206] 00:36:16.500 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2653.47, stdev=71.61, samples=19 00:36:16.500 iops : min= 608, max= 672, avg=663.37, stdev=17.90, samples=19 00:36:16.500 lat (msec) : 20=1.11%, 50=98.89% 00:36:16.500 cpu : usr=98.91%, sys=0.80%, ctx=10, majf=0, minf=27 00:36:16.500 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename1: (groupid=0, jobs=1): err= 0: pid=352774: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=679, BW=2717KiB/s (2782kB/s)(26.6MiB/10018msec) 00:36:16.500 slat (usec): min=5, max=143, avg=11.94, stdev=10.44 00:36:16.500 clat (usec): min=8854, max=41877, avg=23460.85, stdev=2590.70 00:36:16.500 lat (usec): min=8862, max=41885, avg=23472.79, stdev=2590.37 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[12256], 5.00th=[17171], 10.00th=[22938], 20.00th=[23725], 00:36:16.500 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.500 | 99.00th=[29230], 99.50th=[31327], 99.90th=[40633], 99.95th=[41157], 00:36:16.500 | 99.99th=[41681] 00:36:16.500 bw ( KiB/s): min= 2560, max= 2992, per=4.23%, avg=2715.20, stdev=120.42, samples=20 00:36:16.500 iops : min= 640, max= 748, avg=678.80, stdev=30.10, samples=20 00:36:16.500 lat (msec) : 10=0.09%, 20=6.79%, 50=93.12% 00:36:16.500 cpu : usr=98.32%, sys=1.08%, ctx=98, majf=0, minf=29 00:36:16.500 IO depths : 1=4.9%, 2=10.5%, 4=23.2%, 8=53.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.500 filename2: (groupid=0, jobs=1): err= 0: pid=352775: Mon Dec 9 12:11:22 2024 00:36:16.500 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10009msec) 00:36:16.500 slat (usec): min=5, max=139, avg=28.58, stdev=22.82 00:36:16.500 clat (usec): min=9138, max=41736, avg=23545.85, stdev=2525.70 00:36:16.500 lat (usec): min=9145, 
max=41771, avg=23574.43, stdev=2527.98 00:36:16.500 clat percentiles (usec): 00:36:16.500 | 1.00th=[13698], 5.00th=[18744], 10.00th=[22152], 20.00th=[23200], 00:36:16.500 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:16.500 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[26084], 00:36:16.500 | 99.00th=[30278], 99.50th=[33817], 99.90th=[40633], 99.95th=[41681], 00:36:16.500 | 99.99th=[41681] 00:36:16.500 bw ( KiB/s): min= 2549, max= 2944, per=4.18%, avg=2684.89, stdev=88.58, samples=19 00:36:16.500 iops : min= 637, max= 736, avg=671.21, stdev=22.17, samples=19 00:36:16.500 lat (msec) : 10=0.09%, 20=6.95%, 50=92.96% 00:36:16.500 cpu : usr=98.72%, sys=0.90%, ctx=80, majf=0, minf=35 00:36:16.500 IO depths : 1=2.3%, 2=5.7%, 4=17.3%, 8=64.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:36:16.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 complete : 0=0.0%, 4=92.2%, 8=2.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.500 issued rwts: total=6734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352776: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=663, BW=2656KiB/s (2719kB/s)(26.0MiB/10010msec) 00:36:16.501 slat (usec): min=5, max=144, avg=29.72, stdev=20.83 00:36:16.501 clat (usec): min=11383, max=42091, avg=23821.01, stdev=1903.49 00:36:16.501 lat (usec): min=11389, max=42111, avg=23850.74, stdev=1903.10 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[16450], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:16.501 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:16.501 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.501 | 99.00th=[32113], 99.50th=[35390], 99.90th=[42206], 99.95th=[42206], 00:36:16.501 | 99.99th=[42206] 00:36:16.501 bw ( KiB/s): min= 2432, max= 2704, per=4.13%, avg=2650.11, stdev=71.79, samples=19 00:36:16.501 iops : min= 608, max= 676, avg=662.53, stdev=17.95, samples=19 00:36:16.501 lat (msec) : 20=2.00%, 50=98.00% 00:36:16.501 cpu : usr=98.95%, sys=0.77%, ctx=24, majf=0, minf=28 00:36:16.501 IO depths : 1=4.8%, 2=10.4%, 4=23.3%, 8=53.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352777: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10003msec) 00:36:16.501 slat (usec): min=5, max=144, avg=27.45, stdev=24.09 00:36:16.501 clat (usec): min=5045, max=43639, avg=23741.29, stdev=3655.55 00:36:16.501 lat (usec): min=5052, max=43646, avg=23768.74, stdev=3656.02 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[12911], 5.00th=[17695], 10.00th=[20055], 20.00th=[23200], 00:36:16.501 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:16.501 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26084], 95.00th=[30016], 00:36:16.501 | 99.00th=[36963], 99.50th=[40109], 99.90th=[41157], 99.95th=[43779], 00:36:16.501 | 99.99th=[43779] 00:36:16.501 bw ( KiB/s): min= 2436, max= 2848, per=4.14%, avg=2657.05, stdev=102.31, samples=19 00:36:16.501 iops : min= 609, max= 712, avg=664.26, stdev=25.58, samples=19 
00:36:16.501 lat (msec) : 10=0.45%, 20=9.03%, 50=90.52% 00:36:16.501 cpu : usr=98.92%, sys=0.80%, ctx=9, majf=0, minf=36 00:36:16.501 IO depths : 1=2.1%, 2=4.3%, 4=12.1%, 8=69.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=91.1%, 8=5.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352778: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10009msec) 00:36:16.501 slat (usec): min=5, max=136, avg=33.36, stdev=22.40 00:36:16.501 clat (usec): min=11695, max=33281, avg=23749.69, stdev=1008.62 00:36:16.501 lat (usec): min=11701, max=33309, avg=23783.06, stdev=1008.39 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[21890], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:16.501 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:16.501 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:36:16.501 | 99.00th=[26608], 99.50th=[28181], 99.90th=[30016], 99.95th=[30540], 00:36:16.501 | 99.99th=[33162] 00:36:16.501 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2654.32, stdev=56.16, samples=19 00:36:16.501 iops : min= 640, max= 672, avg=663.58, stdev=14.04, samples=19 00:36:16.501 lat (msec) : 20=0.78%, 50=99.22% 00:36:16.501 cpu : usr=97.82%, sys=1.40%, ctx=263, majf=0, minf=27 00:36:16.501 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352779: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.0MiB/10004msec) 00:36:16.501 slat (usec): min=5, max=146, avg=28.91, stdev=23.68 00:36:16.501 clat (usec): min=5330, max=42213, avg=23741.90, stdev=2214.13 00:36:16.501 lat (usec): min=5336, max=42227, avg=23770.81, stdev=2214.11 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[15139], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:16.501 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:16.501 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.501 | 99.00th=[33162], 99.50th=[36439], 99.90th=[42206], 99.95th=[42206], 00:36:16.501 | 99.99th=[42206] 00:36:16.501 bw ( KiB/s): min= 2432, max= 2784, per=4.14%, avg=2656.00, stdev=76.92, samples=19 00:36:16.501 iops : min= 608, max= 696, avg=664.00, stdev=19.23, samples=19 00:36:16.501 lat (msec) : 10=0.24%, 20=2.63%, 50=97.13% 00:36:16.501 cpu : usr=98.72%, sys=0.91%, ctx=55, majf=0, minf=45 00:36:16.501 IO depths : 1=4.2%, 2=8.8%, 4=18.7%, 8=58.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=92.8%, 8=2.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352780: Mon Dec 9 12:11:22 2024 
00:36:16.501 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10012msec) 00:36:16.501 slat (usec): min=5, max=147, avg=16.70, stdev=17.17 00:36:16.501 clat (usec): min=10961, max=40778, avg=23874.35, stdev=1766.67 00:36:16.501 lat (usec): min=11014, max=40788, avg=23891.05, stdev=1765.01 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[13829], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:16.501 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.501 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:16.501 | 99.00th=[28181], 99.50th=[32375], 99.90th=[40633], 99.95th=[40633], 00:36:16.501 | 99.99th=[40633] 00:36:16.501 bw ( KiB/s): min= 2560, max= 2805, per=4.16%, avg=2668.89, stdev=58.95, samples=19 00:36:16.501 iops : min= 640, max= 701, avg=667.21, stdev=14.71, samples=19 00:36:16.501 lat (msec) : 20=2.05%, 50=97.95% 00:36:16.501 cpu : usr=98.98%, sys=0.73%, ctx=15, majf=0, minf=36 00:36:16.501 IO depths : 1=5.5%, 2=11.5%, 4=24.5%, 8=51.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352781: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=661, BW=2647KiB/s (2710kB/s)(25.9MiB/10004msec) 00:36:16.501 slat (usec): min=5, max=109, avg=16.63, stdev=14.24 00:36:16.501 clat (usec): min=3567, max=57763, avg=24104.75, stdev=3272.91 00:36:16.501 lat (usec): min=3572, max=57785, avg=24121.38, stdev=3273.22 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[14353], 5.00th=[19792], 10.00th=[23200], 20.00th=[23725], 00:36:16.501 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:16.501 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[28967], 00:36:16.501 | 99.00th=[34866], 99.50th=[35914], 99.90th=[57934], 99.95th=[57934], 00:36:16.501 | 99.99th=[57934] 00:36:16.501 bw ( KiB/s): min= 2436, max= 2688, per=4.10%, avg=2632.63, stdev=65.70, samples=19 00:36:16.501 iops : min= 609, max= 672, avg=658.16, stdev=16.43, samples=19 00:36:16.501 lat (msec) : 4=0.11%, 10=0.24%, 20=5.14%, 50=94.27%, 100=0.24% 00:36:16.501 cpu : usr=97.37%, sys=1.66%, ctx=500, majf=0, minf=33 00:36:16.501 IO depths : 1=0.4%, 2=1.0%, 4=4.2%, 8=78.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:36:16.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 complete : 0=0.0%, 4=89.9%, 8=8.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.501 issued rwts: total=6619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.501 filename2: (groupid=0, jobs=1): err= 0: pid=352782: Mon Dec 9 12:11:22 2024 00:36:16.501 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10013msec) 00:36:16.501 slat (usec): min=5, max=161, avg=14.08, stdev=12.78 00:36:16.501 clat (usec): min=10030, max=42393, avg=23463.77, stdev=2915.93 00:36:16.501 lat (usec): min=10037, max=42398, avg=23477.85, stdev=2915.83 00:36:16.501 clat percentiles (usec): 00:36:16.501 | 1.00th=[12911], 5.00th=[16581], 10.00th=[21627], 20.00th=[23462], 00:36:16.501 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:16.501 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:16.501 | 
99.00th=[32637], 99.50th=[34341], 99.90th=[42206], 99.95th=[42206], 00:36:16.501 | 99.99th=[42206] 00:36:16.501 bw ( KiB/s): min= 2560, max= 3088, per=4.24%, avg=2719.16, stdev=121.56, samples=19 00:36:16.501 iops : min= 640, max= 772, avg=679.79, stdev=30.39, samples=19 00:36:16.501 lat (msec) : 20=8.36%, 50=91.64% 00:36:16.501 cpu : usr=98.82%, sys=0.92%, ctx=8, majf=0, minf=33 00:36:16.502 IO depths : 1=4.5%, 2=9.6%, 4=21.7%, 8=56.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:16.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.502 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.502 issued rwts: total=6794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:16.502 00:36:16.502 Run status group 0 (all jobs): 00:36:16.502 READ: bw=62.7MiB/s (65.7MB/s), 2647KiB/s-2730KiB/s (2710kB/s-2795kB/s), io=628MiB (658MB), run=10001-10018msec 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 bdev_null0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.502 12:11:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 [2024-12-09 12:11:22.750508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 bdev_null1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.502 12:11:22 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:16.502 { 00:36:16.502 "params": { 00:36:16.502 "name": "Nvme$subsystem", 00:36:16.502 "trtype": "$TEST_TRANSPORT", 00:36:16.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.502 "adrfam": "ipv4", 00:36:16.502 "trsvcid": "$NVMF_PORT", 00:36:16.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.502 "hdgst": ${hdgst:-false}, 00:36:16.502 "ddgst": ${ddgst:-false} 00:36:16.502 }, 00:36:16.502 "method": "bdev_nvme_attach_controller" 00:36:16.502 } 00:36:16.502 EOF 00:36:16.502 )") 00:36:16.502 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:16.503 { 00:36:16.503 "params": { 00:36:16.503 "name": "Nvme$subsystem", 00:36:16.503 "trtype": "$TEST_TRANSPORT", 00:36:16.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.503 "adrfam": "ipv4", 00:36:16.503 "trsvcid": "$NVMF_PORT", 00:36:16.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.503 "hdgst": ${hdgst:-false}, 00:36:16.503 "ddgst": ${ddgst:-false} 00:36:16.503 }, 00:36:16.503 "method": "bdev_nvme_attach_controller" 00:36:16.503 } 00:36:16.503 EOF 00:36:16.503 )") 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@580 -- # jq . 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:16.503 "params": { 00:36:16.503 "name": "Nvme0", 00:36:16.503 "trtype": "tcp", 00:36:16.503 "traddr": "10.0.0.2", 00:36:16.503 "adrfam": "ipv4", 00:36:16.503 "trsvcid": "4420", 00:36:16.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.503 "hdgst": false, 00:36:16.503 "ddgst": false 00:36:16.503 }, 00:36:16.503 "method": "bdev_nvme_attach_controller" 00:36:16.503 },{ 00:36:16.503 "params": { 00:36:16.503 "name": "Nvme1", 00:36:16.503 "trtype": "tcp", 00:36:16.503 "traddr": "10.0.0.2", 00:36:16.503 "adrfam": "ipv4", 00:36:16.503 "trsvcid": "4420", 00:36:16.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:16.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:16.503 "hdgst": false, 00:36:16.503 "ddgst": false 00:36:16.503 }, 00:36:16.503 "method": "bdev_nvme_attach_controller" 00:36:16.503 }' 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:16.503 12:11:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.503 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:16.503 ... 00:36:16.503 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:16.503 ... 
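For reference outside the harness: the create_subsystem steps traced above (bdev_null_create with DIF type 1, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) correspond to plain scripts/rpc.py calls against a running nvmf_tgt. A minimal sketch, assuming rpc.py is run from the SPDK tree and that the tcp transport has not been created yet (the harness does that earlier, outside this excerpt); this mirrors the rpc_cmd sequence rather than replaying it literally:

    # Sketch only -- equivalent out-of-harness RPCs for one DIF-type-1 null
    # bdev exported over NVMe/TCP, as in create_subsystem 0 above.
    scripts/rpc.py nvmf_create_transport -t tcp    # skip if already created
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420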
00:36:16.503 fio-3.35 00:36:16.503 Starting 4 threads 00:36:21.804 00:36:21.804 filename0: (groupid=0, jobs=1): err= 0: pid=355002: Mon Dec 9 12:11:28 2024 00:36:21.804 read: IOPS=2903, BW=22.7MiB/s (23.8MB/s)(113MiB/5001msec) 00:36:21.804 slat (nsec): min=5476, max=91709, avg=9226.50, stdev=3340.70 00:36:21.804 clat (usec): min=1768, max=5895, avg=2730.11, stdev=265.75 00:36:21.804 lat (usec): min=1776, max=5922, avg=2739.33, stdev=265.80 00:36:21.804 clat percentiles (usec): 00:36:21.804 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:36:21.804 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:21.804 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 3097], 00:36:21.804 | 99.00th=[ 3884], 99.50th=[ 3982], 99.90th=[ 4293], 99.95th=[ 5800], 00:36:21.804 | 99.99th=[ 5866] 00:36:21.804 bw ( KiB/s): min=22877, max=23680, per=24.76%, avg=23253.00, stdev=229.84, samples=9 00:36:21.804 iops : min= 2859, max= 2960, avg=2906.56, stdev=28.86, samples=9 00:36:21.804 lat (msec) : 2=0.23%, 4=99.32%, 10=0.45% 00:36:21.804 cpu : usr=90.74%, sys=6.02%, ctx=477, majf=0, minf=9 00:36:21.804 IO depths : 1=0.1%, 2=0.1%, 4=72.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 issued rwts: total=14522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.804 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:21.804 filename0: (groupid=0, jobs=1): err= 0: pid=355003: Mon Dec 9 12:11:28 2024 00:36:21.804 read: IOPS=2955, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:36:21.804 slat (nsec): min=5481, max=76201, avg=8596.78, stdev=2763.96 00:36:21.804 clat (usec): min=1410, max=4284, avg=2686.40, stdev=234.62 00:36:21.804 lat (usec): min=1419, max=4290, avg=2694.99, stdev=234.69 00:36:21.804 clat percentiles (usec): 00:36:21.804 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2540], 00:36:21.804 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:21.804 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 2999], 00:36:21.804 | 99.00th=[ 3589], 99.50th=[ 3752], 99.90th=[ 4113], 99.95th=[ 4228], 00:36:21.804 | 99.99th=[ 4293] 00:36:21.804 bw ( KiB/s): min=23424, max=23856, per=25.15%, avg=23621.22, stdev=142.72, samples=9 00:36:21.804 iops : min= 2928, max= 2982, avg=2952.56, stdev=17.90, samples=9 00:36:21.804 lat (msec) : 2=0.74%, 4=99.11%, 10=0.16% 00:36:21.804 cpu : usr=95.98%, sys=3.76%, ctx=6, majf=0, minf=9 00:36:21.804 IO depths : 1=0.1%, 2=0.3%, 4=67.1%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 issued rwts: total=14781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.804 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:21.804 filename1: (groupid=0, jobs=1): err= 0: pid=355004: Mon Dec 9 12:11:28 2024 00:36:21.804 read: IOPS=2828, BW=22.1MiB/s (23.2MB/s)(110MiB/5001msec) 00:36:21.804 slat (usec): min=5, max=100, avg= 8.21, stdev= 2.72 00:36:21.804 clat (usec): min=1188, max=7582, avg=2805.96, stdev=378.49 00:36:21.804 lat (usec): min=1193, max=7621, avg=2814.16, stdev=378.51 00:36:21.804 clat percentiles (usec): 00:36:21.804 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:36:21.804 | 30.00th=[ 2671], 40.00th=[ 2704], 
50.00th=[ 2704], 60.00th=[ 2737], 00:36:21.804 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 3064], 95.00th=[ 3851], 00:36:21.804 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4948], 99.95th=[ 7177], 00:36:21.804 | 99.99th=[ 7308] 00:36:21.804 bw ( KiB/s): min=21488, max=23104, per=24.18%, avg=22707.56, stdev=493.33, samples=9 00:36:21.804 iops : min= 2686, max= 2888, avg=2838.44, stdev=61.67, samples=9 00:36:21.804 lat (msec) : 2=0.17%, 4=97.23%, 10=2.60% 00:36:21.804 cpu : usr=95.90%, sys=3.86%, ctx=7, majf=0, minf=9 00:36:21.804 IO depths : 1=0.1%, 2=0.2%, 4=73.7%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.804 issued rwts: total=14143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.804 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:21.804 filename1: (groupid=0, jobs=1): err= 0: pid=355005: Mon Dec 9 12:11:28 2024 00:36:21.804 read: IOPS=3052, BW=23.8MiB/s (25.0MB/s)(119MiB/5002msec) 00:36:21.804 slat (nsec): min=5466, max=77608, avg=7845.86, stdev=1992.45 00:36:21.805 clat (usec): min=1152, max=4590, avg=2599.87, stdev=310.25 00:36:21.805 lat (usec): min=1160, max=4599, avg=2607.71, stdev=310.39 00:36:21.805 clat percentiles (usec): 00:36:21.805 | 1.00th=[ 1860], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2376], 00:36:21.805 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:21.805 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 3228], 00:36:21.805 | 99.00th=[ 3621], 99.50th=[ 3654], 99.90th=[ 3851], 99.95th=[ 3884], 00:36:21.805 | 99.99th=[ 4555] 00:36:21.805 bw ( KiB/s): min=23888, max=25264, per=25.93%, avg=24352.00, stdev=403.19, samples=9 00:36:21.805 iops : min= 2986, max= 3158, avg=3044.00, stdev=50.40, samples=9 00:36:21.805 lat (msec) : 2=2.56%, 4=97.43%, 10=0.01% 00:36:21.805 cpu : usr=96.88%, sys=2.86%, ctx=8, majf=0, minf=9 00:36:21.805 IO depths : 1=0.1%, 2=0.4%, 4=71.0%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:21.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.805 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:21.805 issued rwts: total=15270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:21.805 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:21.805 00:36:21.805 Run status group 0 (all jobs): 00:36:21.805 READ: bw=91.7MiB/s (96.2MB/s), 22.1MiB/s-23.8MiB/s (23.2MB/s-25.0MB/s), io=459MiB (481MB), run=5001-5002msec 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 00:36:21.805 real 0m24.268s 00:36:21.805 user 5m16.389s 00:36:21.805 sys 0m4.850s 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 ************************************ 00:36:21.805 END TEST fio_dif_rand_params 00:36:21.805 ************************************ 00:36:21.805 12:11:29 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:21.805 12:11:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:21.805 12:11:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 ************************************ 00:36:21.805 START TEST fio_dif_digest 00:36:21.805 ************************************ 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:21.805 12:11:29 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 bdev_null0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.805 [2024-12-09 12:11:29.260066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:21.805 { 00:36:21.805 "params": { 00:36:21.805 "name": "Nvme$subsystem", 00:36:21.805 "trtype": "$TEST_TRANSPORT", 00:36:21.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.805 "adrfam": "ipv4", 00:36:21.805 "trsvcid": "$NVMF_PORT", 00:36:21.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.805 
"hdgst": ${hdgst:-false}, 00:36:21.805 "ddgst": ${ddgst:-false} 00:36:21.805 }, 00:36:21.805 "method": "bdev_nvme_attach_controller" 00:36:21.805 } 00:36:21.805 EOF 00:36:21.805 )") 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:21.805 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:21.806 "params": { 00:36:21.806 "name": "Nvme0", 00:36:21.806 "trtype": "tcp", 00:36:21.806 "traddr": "10.0.0.2", 00:36:21.806 "adrfam": "ipv4", 00:36:21.806 "trsvcid": "4420", 00:36:21.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.806 "hdgst": true, 00:36:21.806 "ddgst": true 00:36:21.806 }, 00:36:21.806 "method": "bdev_nvme_attach_controller" 00:36:21.806 }' 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:21.806 12:11:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.067 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:22.067 ... 
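fio echoes the generated job back before starting: a randread workload, 128KiB blocks, iodepth 3, on the spdk_bdev ioengine. The harness pipes its job file in over /dev/fd/61 via gen_fio_conf; an equivalent standalone jobfile, assuming the JSON config from the sketch above sits at /tmp/bdev_nvme.json, might look like:

    [global]
    ioengine=spdk_bdev             ; route I/O through the SPDK fio plugin
    spdk_json_conf=/tmp/bdev_nvme.json
    thread=1
    direct=1

    [filename0]
    filename=Nvme0n1               ; bdev created by bdev_nvme_attach_controller
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3                      ; matches "Starting 3 threads" below

As the xtrace above shows, fio is launched with the plugin preloaded, e.g. LD_PRELOAD=.../spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev and the conf/job files passed on the command line.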
00:36:22.067 fio-3.35 00:36:22.067 Starting 3 threads 00:36:34.306 00:36:34.306 filename0: (groupid=0, jobs=1): err= 0: pid=356496: Mon Dec 9 12:11:40 2024 00:36:34.306 read: IOPS=393, BW=49.2MiB/s (51.6MB/s)(493MiB/10002msec) 00:36:34.306 slat (nsec): min=5872, max=37292, avg=7916.25, stdev=1607.87 00:36:34.306 clat (usec): min=5077, max=10705, avg=7605.69, stdev=1194.14 00:36:34.306 lat (usec): min=5086, max=10712, avg=7613.61, stdev=1194.25 00:36:34.306 clat percentiles (usec): 00:36:34.306 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6390], 00:36:34.306 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8094], 00:36:34.306 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9503], 00:36:34.306 | 99.00th=[10028], 99.50th=[10290], 99.90th=[10683], 99.95th=[10683], 00:36:34.306 | 99.99th=[10683] 00:36:34.306 bw ( KiB/s): min=46336, max=54016, per=44.83%, avg=50405.05, stdev=2223.41, samples=19 00:36:34.306 iops : min= 362, max= 422, avg=393.79, stdev=17.37, samples=19 00:36:34.306 lat (msec) : 10=99.04%, 20=0.96% 00:36:34.306 cpu : usr=95.79%, sys=3.97%, ctx=17, majf=0, minf=137 00:36:34.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 issued rwts: total=3940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.306 filename0: (groupid=0, jobs=1): err= 0: pid=356497: Mon Dec 9 12:11:40 2024 00:36:34.306 read: IOPS=169, BW=21.2MiB/s (22.3MB/s)(213MiB/10043msec) 00:36:34.306 slat (nsec): min=5932, max=32847, avg=6657.23, stdev=800.14 00:36:34.306 clat (msec): min=7, max=132, avg=17.64, stdev=16.63 00:36:34.306 lat (msec): min=8, max=132, avg=17.64, stdev=16.63 00:36:34.306 clat percentiles (msec): 00:36:34.306 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:36:34.306 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:36:34.306 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 52], 95.00th=[ 52], 00:36:34.306 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 94], 99.95th=[ 133], 00:36:34.306 | 99.99th=[ 133] 00:36:34.306 bw ( KiB/s): min=15360, max=31488, per=19.59%, avg=22029.47, stdev=4971.70, samples=19 00:36:34.306 iops : min= 120, max= 246, avg=172.11, stdev=38.84, samples=19 00:36:34.306 lat (msec) : 10=19.35%, 20=64.63%, 50=1.06%, 100=14.90%, 250=0.06% 00:36:34.306 cpu : usr=95.53%, sys=4.25%, ctx=19, majf=0, minf=119 00:36:34.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 issued rwts: total=1705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.306 filename0: (groupid=0, jobs=1): err= 0: pid=356498: Mon Dec 9 12:11:40 2024 00:36:34.306 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(397MiB/10004msec) 00:36:34.306 slat (nsec): min=5839, max=31530, avg=6564.89, stdev=891.34 00:36:34.306 clat (usec): min=4221, max=13381, avg=9440.40, stdev=1490.06 00:36:34.306 lat (usec): min=4227, max=13387, avg=9446.97, stdev=1490.06 00:36:34.306 clat percentiles (usec): 00:36:34.306 | 1.00th=[ 6849], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 7963], 00:36:34.306 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 
9372], 60.00th=[10028], 00:36:34.306 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11863], 00:36:34.306 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13304], 99.95th=[13304], 00:36:34.306 | 99.99th=[13435] 00:36:34.306 bw ( KiB/s): min=36864, max=43264, per=36.05%, avg=40528.84, stdev=1820.33, samples=19 00:36:34.306 iops : min= 288, max= 338, avg=316.63, stdev=14.22, samples=19 00:36:34.306 lat (msec) : 10=59.86%, 20=40.14% 00:36:34.306 cpu : usr=94.29%, sys=5.48%, ctx=26, majf=0, minf=152 00:36:34.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.306 issued rwts: total=3176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.306 00:36:34.306 Run status group 0 (all jobs): 00:36:34.306 READ: bw=110MiB/s (115MB/s), 21.2MiB/s-49.2MiB/s (22.3MB/s-51.6MB/s), io=1103MiB (1156MB), run=10002-10043msec 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.306 00:36:34.306 real 0m11.220s 00:36:34.306 user 0m41.740s 00:36:34.306 sys 0m1.724s 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.306 12:11:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.306 ************************************ 00:36:34.306 END TEST fio_dif_digest 00:36:34.306 ************************************ 00:36:34.306 12:11:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:34.306 12:11:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@122 -- # sync 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@125 -- # set +e 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@126 -- # for i in {1..20} 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:36:34.306 rmmod nvme_tcp 00:36:34.306 rmmod nvme_fabrics 00:36:34.306 rmmod nvme_keyring 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:36:34.306 12:11:40 
nvmf_dif -- nvmf/common.sh@129 -- # set -e 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@130 -- # return 0 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 346233 ']' 00:36:34.306 12:11:40 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 346233 00:36:34.306 12:11:40 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 346233 ']' 00:36:34.306 12:11:40 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 346233 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346233 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346233' 00:36:34.307 killing process with pid 346233 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@973 -- # kill 346233 00:36:34.307 12:11:40 nvmf_dif -- common/autotest_common.sh@978 -- # wait 346233 00:36:34.307 12:11:40 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:36:34.307 12:11:40 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:36.226 Waiting for block devices as requested 00:36:36.226 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:36.487 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:36.487 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:36.487 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:36.487 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:36.748 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:36.748 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:36.748 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:37.008 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:37.009 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:37.269 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:37.269 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:37.269 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:37.530 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:37.530 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:37.530 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:37.791 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@298 -- # iptr 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@303 -- # remove_spdk_ns 00:36:38.052 12:11:45 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.052 12:11:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:38.052 12:11:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.968 12:11:47 nvmf_dif -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:36:39.968 00:36:39.968 real 1m17.729s 00:36:39.968 user 8m5.267s 00:36:39.968 sys 0m21.746s 
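The nvmf_dif teardown above compresses several steps into a few xtrace lines. Spelled out, nvmftestfini does roughly the following (a condensed sketch; the pid, paths, and interface names are the ones from this run):

    modprobe -r nvme-tcp nvme-fabrics nvme-keyring        # unload kernel initiator modules
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 346233: stop nvmf_tgt
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
                                                          # rebind devices to kernel drivers
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # _remove_spdk_ns: drop target netns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side interface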
00:36:39.968 12:11:47 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.968 12:11:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.968 ************************************ 00:36:39.968 END TEST nvmf_dif 00:36:39.968 ************************************ 00:36:40.230 12:11:47 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:40.230 12:11:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.230 12:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.230 12:11:47 -- common/autotest_common.sh@10 -- # set +x 00:36:40.230 ************************************ 00:36:40.230 START TEST nvmf_abort_qd_sizes 00:36:40.230 ************************************ 00:36:40.230 12:11:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:40.230 * Looking for test storage... 00:36:40.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.230 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:40.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.231 --rc genhtml_branch_coverage=1 00:36:40.231 --rc genhtml_function_coverage=1 00:36:40.231 --rc genhtml_legend=1 00:36:40.231 --rc geninfo_all_blocks=1 00:36:40.231 --rc geninfo_unexecuted_blocks=1 00:36:40.231 00:36:40.231 ' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:40.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.231 --rc genhtml_branch_coverage=1 00:36:40.231 --rc genhtml_function_coverage=1 00:36:40.231 --rc genhtml_legend=1 00:36:40.231 --rc geninfo_all_blocks=1 00:36:40.231 --rc geninfo_unexecuted_blocks=1 00:36:40.231 00:36:40.231 ' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:40.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.231 --rc genhtml_branch_coverage=1 00:36:40.231 --rc genhtml_function_coverage=1 00:36:40.231 --rc genhtml_legend=1 00:36:40.231 --rc geninfo_all_blocks=1 00:36:40.231 --rc geninfo_unexecuted_blocks=1 00:36:40.231 00:36:40.231 ' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:40.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.231 --rc genhtml_branch_coverage=1 00:36:40.231 --rc genhtml_function_coverage=1 00:36:40.231 --rc genhtml_legend=1 00:36:40.231 --rc geninfo_all_blocks=1 00:36:40.231 --rc geninfo_unexecuted_blocks=1 00:36:40.231 00:36:40.231 ' 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.231 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # : 0 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:36:40.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@56 -- # have_pci_nics=0 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # xtrace_disable 00:36:40.493 12:11:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_devs=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_devs 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_net_devs=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -a pci_net_devs 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # pci_drivers=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # local -A pci_drivers 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # net_devs=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga net_devs 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # e810=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@321 -- # local -ga e810 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # x722=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga x722 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@323 -- # mlx=() 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@323 -- # local -ga mlx 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@333 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # pci_devs+=("${e810[@]}") 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@348 -- # [[ tcp == rdma ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # [[ e810 == mlx5 ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # [[ e810 == e810 ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@357 -- # pci_devs=("${e810[@]}") 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@362 -- # (( 2 == 0 )) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:47.083 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # for pci in "${pci_devs[@]}" 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:47.083 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:47.083 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@369 -- # [[ ice == unknown ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@373 -- # [[ ice == unbound ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@378 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@379 -- # [[ tcp == rdma ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@393 -- # (( 0 > 0 )) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # [[ e810 == e810 ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # [[ tcp == rdma ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:47.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:47.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@257 -- # (( 2 > 1 )) 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_TARGET_IP= 00:36:47.084 12:11:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # NVMF_SECOND_INITIATOR_IP= 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_0 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@269 -- # ip -4 addr flush cvl_0_1 00:36:47.084 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@272 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.345 12:11:54 nvmf_abort_qd_sizes -- nvmf/common.sh@275 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@282 -- # ip link set cvl_0_1 up 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@288 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:47.345 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ping -c 1 10.0.0.2 00:36:47.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:36:47.604 00:36:47.604 --- 10.0.0.2 ping statistics --- 00:36:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.604 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:36:47.604 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:36:47.604 00:36:47.604 --- 10.0.0.1 ping statistics --- 00:36:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.604 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:36:47.604 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.604 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:36:47.604 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:36:47.604 12:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:51.146 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:51.146 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=365943 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 365943 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 365943 ']' 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:51.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.407 12:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.407 [2024-12-09 12:11:59.222344] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:36:51.407 [2024-12-09 12:11:59.222403] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.669 [2024-12-09 12:11:59.315770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.669 [2024-12-09 12:11:59.354682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.669 [2024-12-09 12:11:59.354718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.669 [2024-12-09 12:11:59.354726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.669 [2024-12-09 12:11:59.354733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.669 [2024-12-09 12:11:59.354739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:51.669 [2024-12-09 12:11:59.356726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.669 [2024-12-09 12:11:59.356892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.669 [2024-12-09 12:11:59.357048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.669 [2024-12-09 12:11:59.357048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:52.242 
12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.242 12:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:52.242 ************************************ 00:36:52.242 START TEST spdk_target_abort 00:36:52.242 ************************************ 00:36:52.242 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:52.242 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:52.242 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:52.242 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.242 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.814 spdk_targetn1 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.814 [2024-12-09 12:12:00.409868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.814 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.815 [2024-12-09 12:12:00.462205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.815 12:12:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.075 [2024-12-09 12:12:00.721488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:53.075 [2024-12-09 12:12:00.721514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:36:53.075 [2024-12-09 12:12:00.729116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:248 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:53.075 [2024-12-09 12:12:00.729132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0021 p:1 m:0 dnr:0 00:36:53.075 [2024-12-09 12:12:00.799105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2656 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:53.075 [2024-12-09 12:12:00.799123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:56.375 Initializing NVMe Controllers 00:36:56.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:56.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:56.375 Initialization complete. Launching workers. 00:36:56.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13172, failed: 3 00:36:56.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3356, failed to submit 9819 00:36:56.375 success 760, unsuccessful 2596, failed 0 00:36:56.375 12:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:56.375 12:12:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:56.375 [2024-12-09 12:12:03.903603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:36:56.375 [2024-12-09 12:12:03.903648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:36:56.375 [2024-12-09 12:12:03.963759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1576 len:8 PRP1 0x200004e54000 PRP2 0x0 00:36:56.375 [2024-12-09 12:12:03.963786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:36:56.375 [2024-12-09 12:12:04.036677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:3440 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:36:56.375 [2024-12-09 12:12:04.036702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00af p:0 m:0 dnr:0 00:36:56.375 [2024-12-09 12:12:04.036939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:3448 len:8 PRP1 0x200004e42000 PRP2 0x0 00:36:56.375 [2024-12-09 12:12:04.036949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:36:56.375 [2024-12-09 12:12:04.052642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:3848 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:36:56.375 [2024-12-09 12:12:04.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:4 cid:189 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:36:58.287 [2024-12-09 12:12:06.105768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:49608 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:36:58.287 [2024-12-09 12:12:06.105807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:003c p:1 m:0 dnr:0 00:36:58.859 [2024-12-09 12:12:06.560298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:60128 len:8 PRP1 0x200004e50000 PRP2 0x0 00:36:58.859 [2024-12-09 12:12:06.560322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:59.429 Initializing NVMe Controllers 00:36:59.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:59.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:59.429 Initialization complete. Launching workers. 00:36:59.429 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8424, failed: 7 00:36:59.429 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1186, failed to submit 7245 00:36:59.429 success 323, unsuccessful 863, failed 0 00:36:59.429 12:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:59.429 12:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.974 [2024-12-09 12:12:09.449526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:245192 len:8 PRP1 0x200004ace000 PRP2 0x0 00:37:01.974 [2024-12-09 12:12:09.449555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:01.974 [2024-12-09 12:12:09.538249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:255432 len:8 PRP1 0x200004b24000 PRP2 0x0 00:37:01.974 [2024-12-09 12:12:09.538271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:0039 p:1 m:0 dnr:0 00:37:02.543 Initializing NVMe Controllers 00:37:02.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:02.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:02.543 Initialization complete. Launching workers. 
00:37:02.543 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43511, failed: 2 00:37:02.543 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2639, failed to submit 40874 00:37:02.543 success 587, unsuccessful 2052, failed 0 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.543 12:12:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 365943 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 365943 ']' 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 365943 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365943 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365943' 00:37:04.455 killing process with pid 365943 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 365943 00:37:04.455 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 365943 00:37:04.717 00:37:04.717 real 0m12.285s 00:37:04.717 user 0m49.999s 00:37:04.717 sys 0m1.994s 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:04.717 ************************************ 00:37:04.717 END TEST spdk_target_abort 00:37:04.717 ************************************ 00:37:04.717 12:12:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:04.717 12:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.717 12:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.717 12:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:04.717 ************************************ 00:37:04.717 START TEST kernel_target_abort 00:37:04.717 
************************************ 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:37:04.717 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:04.718 12:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:08.023 Waiting for block devices as requested 00:37:08.023 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:08.284 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:08.284 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:08.284 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:08.545 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:08.545 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:08.545 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:08.806 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:08.806 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:09.067 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:09.067 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:09.067 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:09.328 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:09.328 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:09.328 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:09.588 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:09.588 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:09.848 No valid GPT data, bailing 00:37:09.848 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:10.107 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:10.107 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:10.108 12:12:17 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:37:10.108
00:37:10.108 Discovery Log Number of Records 2, Generation counter 2
00:37:10.108 =====Discovery Log Entry 0======
00:37:10.108 trtype: tcp
00:37:10.108 adrfam: ipv4
00:37:10.108 subtype: current discovery subsystem
00:37:10.108 treq: not specified, sq flow control disable supported
00:37:10.108 portid: 1
00:37:10.108 trsvcid: 4420
00:37:10.108 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:37:10.108 traddr: 10.0.0.1
00:37:10.108 eflags: none
00:37:10.108 sectype: none
00:37:10.108 =====Discovery Log Entry 1======
00:37:10.108 trtype: tcp
00:37:10.108 adrfam: ipv4
00:37:10.108 subtype: nvme subsystem
00:37:10.108 treq: not specified, sq flow control disable supported
00:37:10.108 portid: 1
00:37:10.108 trsvcid: 4420
00:37:10.108 subnqn: nqn.2016-06.io.spdk:testnqn
00:37:10.108 traddr: 10.0.0.1
00:37:10.108 eflags: none
00:37:10.108 sectype: none
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:37:10.108 12:12:17
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:10.108 12:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.405 Initializing NVMe Controllers 00:37:13.405 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:13.405 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:13.405 Initialization complete. Launching workers. 00:37:13.405 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66904, failed: 0 00:37:13.405 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66904, failed to submit 0 00:37:13.405 success 0, unsuccessful 66904, failed 0 00:37:13.405 12:12:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:13.405 12:12:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.706 Initializing NVMe Controllers 00:37:16.706 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:16.706 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:16.706 Initialization complete. Launching workers. 
00:37:16.706 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117350, failed: 0 00:37:16.706 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29530, failed to submit 87820 00:37:16.706 success 0, unsuccessful 29530, failed 0 00:37:16.706 12:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.706 12:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:20.004 Initializing NVMe Controllers 00:37:20.004 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:20.004 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:20.004 Initialization complete. Launching workers. 00:37:20.004 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145324, failed: 0 00:37:20.004 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36366, failed to submit 108958 00:37:20.004 success 0, unsuccessful 36366, failed 0 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:37:20.004 12:12:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:23.310 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:23.310 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:23.310 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:25.225 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:25.225 00:37:25.225 real 0m20.530s 00:37:25.225 user 0m9.968s 00:37:25.225 sys 0m6.130s 00:37:25.225 12:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:25.225 12:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:25.225 ************************************ 00:37:25.225 END TEST kernel_target_abort 00:37:25.225 ************************************ 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # sync 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # '[' tcp == tcp ']' 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # set +e 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # for i in {1..20} 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-tcp 00:37:25.225 rmmod nvme_tcp 00:37:25.225 rmmod nvme_fabrics 00:37:25.225 rmmod nvme_keyring 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # modprobe -v -r nvme-fabrics 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # set -e 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@130 -- # return 0 00:37:25.225 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 365943 ']' 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 365943 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 365943 ']' 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 365943 00:37:25.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (365943) - No such process 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 365943 is not found' 00:37:25.486 Process with pid 365943 is not found 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:37:25.486 12:12:33 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:28.786 Waiting for block devices as requested 00:37:28.786 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:28.786 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:28.786 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:29.048 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:29.048 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:29.048 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:29.309 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:29.309 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:29.309 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:29.570 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:29.570 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:29.831 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:29.831 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:29.831 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:30.092 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:30.092 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:30.092 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # iptr 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # remove_spdk_ns 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.353 12:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:30.354 12:12:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:32.902 12:12:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # ip -4 addr flush cvl_0_1 00:37:32.902 00:37:32.902 real 0m52.402s 00:37:32.902 user 1m5.250s 00:37:32.902 sys 0m19.033s 00:37:32.902 12:12:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.902 12:12:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.902 ************************************ 00:37:32.902 END TEST nvmf_abort_qd_sizes 00:37:32.902 ************************************ 00:37:32.902 12:12:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:32.902 12:12:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:32.902 12:12:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:32.902 12:12:40 -- common/autotest_common.sh@10 -- # set +x 00:37:32.902 ************************************ 00:37:32.902 START TEST keyring_file 00:37:32.902 ************************************ 00:37:32.902 12:12:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:32.902 * Looking for test storage... 
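For orientation before the raw trace continues: the keyring_file suite that just started exercises SPDK's file-based keyring over JSON-RPC. A minimal standalone sketch of the flow it drives is shown here; it is not part of the captured run, it assumes a target/bdevperf instance already listening on the bperf RPC socket, and the key value is elided rather than a real PSK (rpc.py is SPDK's bundled RPC client):

```bash
#!/usr/bin/env bash
# Sketch only: condenses the keyring_file steps traced in the log below.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

umask 177                                   # keyring_file requires 0600 key files
psk_file=$(mktemp)                          # analogous to the /tmp/tmp.XXXXXXXXXX files below
echo 'NVMeTLSkey-1:00:...:' > "$psk_file"   # interchange-format PSK (value elided)

$RPC keyring_file_add_key key0 "$psk_file"  # register the file under the name key0
$RPC keyring_get_keys                       # inspect name, path and refcnt

# Attach an NVMe-oF/TCP controller, authenticating TLS with the named key.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

$RPC bdev_nvme_detach_controller nvme0      # drop the controller, then the key
$RPC keyring_file_remove_key key0
```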
00:37:32.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:32.902 12:12:40 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:32.902 12:12:40 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:32.902 12:12:40 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:32.902 12:12:40 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:32.902 12:12:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:32.903 12:12:40 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:32.903 12:12:40 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:32.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.903 --rc genhtml_branch_coverage=1 00:37:32.903 --rc genhtml_function_coverage=1 00:37:32.903 --rc genhtml_legend=1 00:37:32.903 --rc geninfo_all_blocks=1 00:37:32.903 --rc geninfo_unexecuted_blocks=1 00:37:32.903 00:37:32.903 ' 00:37:32.903 12:12:40 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:32.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.903 --rc genhtml_branch_coverage=1 00:37:32.903 --rc genhtml_function_coverage=1 00:37:32.903 --rc genhtml_legend=1 00:37:32.903 --rc geninfo_all_blocks=1 
00:37:32.903 --rc geninfo_unexecuted_blocks=1 00:37:32.903 00:37:32.903 ' 00:37:32.903 12:12:40 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:32.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.903 --rc genhtml_branch_coverage=1 00:37:32.903 --rc genhtml_function_coverage=1 00:37:32.903 --rc genhtml_legend=1 00:37:32.903 --rc geninfo_all_blocks=1 00:37:32.903 --rc geninfo_unexecuted_blocks=1 00:37:32.903 00:37:32.903 ' 00:37:32.903 12:12:40 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:32.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:32.903 --rc genhtml_branch_coverage=1 00:37:32.903 --rc genhtml_function_coverage=1 00:37:32.903 --rc genhtml_legend=1 00:37:32.903 --rc geninfo_all_blocks=1 00:37:32.903 --rc geninfo_unexecuted_blocks=1 00:37:32.903 00:37:32.903 ' 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:32.903 12:12:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.903 12:12:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.903 12:12:40 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.903 12:12:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.903 12:12:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.903 12:12:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:32.903 12:12:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@52 -- # : 0 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:37:32.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:37:32.903 12:12:40 keyring_file -- nvmf/common.sh@56 -- # have_pci_nics=0 00:37:32.903 12:12:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:32.903 12:12:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:32.903 12:12:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
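The prep_key/format_interchange_psk helpers invoked next wrap each raw hex string into the NVMe/TCP PSK interchange format before writing it to a 0600 temp file. A rough reconstruction of that computation follows; it assumes the TP 8011 interchange layout (key bytes with a little-endian CRC32 appended, base64-encoded, behind a two-digit hash indicator) and that the helper consumes the key string byte-for-byte, so treat it as a sketch rather than SPDK's exact code:

```bash
# Sketch: plausible equivalent of format_interchange_psk "$key" "$digest".
# The suite's own helper pipes a similar program into `python -`.
key=00112233445566778899aabbccddeeff   # key0 from file.sh@15
digest=0                               # assumed: 0 selects the "00" (no retained hash) indicator

python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # assumption: literal ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity tag appended to the key material
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64))
EOF
```

The resulting single-line key is what lands in the /tmp/tmp.* files here; the permission and deletion tests later in this suite depend on those files staying mode 0600 and present.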
00:37:32.903 12:12:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Fr4Hk1qXb8 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@729 -- # python - 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Fr4Hk1qXb8 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Fr4Hk1qXb8 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Fr4Hk1qXb8 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NPaqVI7ALG 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:37:32.904 12:12:40 keyring_file -- nvmf/common.sh@729 -- # python - 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NPaqVI7ALG 00:37:32.904 12:12:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NPaqVI7ALG 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NPaqVI7ALG 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=376690 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 376690 00:37:32.904 12:12:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 376690 ']' 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:32.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.904 12:12:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.165 [2024-12-09 12:12:40.812306] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:37:33.165 [2024-12-09 12:12:40.812379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376690 ] 00:37:33.165 [2024-12-09 12:12:40.906219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.165 [2024-12-09 12:12:40.960027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.738 12:12:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.738 12:12:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:33.738 12:12:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:33.738 12:12:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.738 12:12:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.000 [2024-12-09 12:12:41.625577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.000 null0 00:37:34.000 [2024-12-09 12:12:41.657607] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:34.000 [2024-12-09 12:12:41.658179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.000 12:12:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.000 [2024-12-09 12:12:41.689677] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:34.000 request: 00:37:34.000 { 00:37:34.000 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.000 "secure_channel": false, 00:37:34.000 "listen_address": { 00:37:34.000 "trtype": "tcp", 00:37:34.000 "traddr": "127.0.0.1", 00:37:34.000 "trsvcid": "4420" 00:37:34.000 }, 00:37:34.000 "method": "nvmf_subsystem_add_listener", 00:37:34.000 "req_id": 1 00:37:34.000 } 00:37:34.000 Got JSON-RPC error response 00:37:34.000 response: 00:37:34.000 { 00:37:34.000 "code": 
-32602, 00:37:34.000 "message": "Invalid parameters" 00:37:34.000 } 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:34.000 12:12:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=376736 00:37:34.000 12:12:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 376736 /var/tmp/bperf.sock 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 376736 ']' 00:37:34.000 12:12:41 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:34.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:34.000 12:12:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.000 [2024-12-09 12:12:41.751593] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:37:34.000 [2024-12-09 12:12:41.751665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376736 ] 00:37:34.000 [2024-12-09 12:12:41.832560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.261 [2024-12-09 12:12:41.884601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.835 12:12:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.835 12:12:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:34.835 12:12:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:34.835 12:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:35.095 12:12:42 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NPaqVI7ALG 00:37:35.095 12:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NPaqVI7ALG 00:37:35.095 12:12:42 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:35.095 12:12:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:35.095 12:12:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.095 12:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.095 12:12:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.356 
12:12:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Fr4Hk1qXb8 == \/\t\m\p\/\t\m\p\.\F\r\4\H\k\1\q\X\b\8 ]] 00:37:35.356 12:12:43 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:35.356 12:12:43 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:35.356 12:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.356 12:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.356 12:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:35.618 12:12:43 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.NPaqVI7ALG == \/\t\m\p\/\t\m\p\.\N\P\a\q\V\I\7\A\L\G ]] 00:37:35.618 12:12:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:35.618 12:12:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.618 12:12:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.618 12:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.618 12:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:35.618 12:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.879 12:12:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:35.879 12:12:43 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.879 12:12:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:35.879 12:12:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:35.879 12:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:36.140 [2024-12-09 12:12:43.915458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:36.140 nvme0n1 00:37:36.140 12:12:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:36.140 12:12:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.140 12:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.140 12:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.140 12:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.140 12:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.402 12:12:44 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:36.402 12:12:44 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:36.402 12:12:44 keyring_file -- 
keyring/common.sh@12 -- # get_key key1
00:37:36.402 12:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:36.402 12:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:36.402 12:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:36.402 12:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:36.662 12:12:44 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:37:36.662 12:12:44 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:36.662 Running I/O for 1 seconds...
00:37:37.600 20178.00 IOPS, 78.82 MiB/s
00:37:37.600 Latency(us)
00:37:37.600 [2024-12-09T11:12:45.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:37.600 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:37.600 nvme0n1 : 1.00 20228.20 79.02 0.00 0.00 6317.34 3822.93 17585.49
00:37:37.600 [2024-12-09T11:12:45.486Z] ===================================================================================================================
00:37:37.600 [2024-12-09T11:12:45.486Z] Total : 20228.20 79.02 0.00 0.00 6317.34 3822.93 17585.49
00:37:37.600 {
00:37:37.600 "results": [
00:37:37.600 {
00:37:37.600 "job": "nvme0n1",
00:37:37.600 "core_mask": "0x2",
00:37:37.600 "workload": "randrw",
00:37:37.600 "percentage": 50,
00:37:37.600 "status": "finished",
00:37:37.600 "queue_depth": 128,
00:37:37.600 "io_size": 4096,
00:37:37.600 "runtime": 1.003846,
00:37:37.600 "iops": 20228.202333824112,
00:37:37.600 "mibps": 79.01641536650044,
00:37:37.600 "io_failed": 0,
00:37:37.600 "io_timeout": 0,
00:37:37.600 "avg_latency_us": 6317.340046620047,
00:37:37.600 "min_latency_us": 3822.9333333333334,
00:37:37.600 "max_latency_us": 17585.493333333332
00:37:37.600 }
00:37:37.600 ],
00:37:37.600 "core_count": 1
00:37:37.600 }
00:37:37.860 12:12:45 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:37.860 12:12:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:37.860 12:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:38.119 12:12:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:37:38.119 12:12:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:37:38.119 12:12:45 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:38.119 12:12:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:38.119 12:12:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:38.119 12:12:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:38.119 12:12:45
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:38.378 12:12:46 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:38.378 12:12:46 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:38.379 [2024-12-09 12:12:46.191007] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:38.379 [2024-12-09 12:12:46.191350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214870 (107): Transport endpoint is not connected 00:37:38.379 [2024-12-09 12:12:46.192345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214870 (9): Bad file descriptor 00:37:38.379 [2024-12-09 12:12:46.193346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:38.379 [2024-12-09 12:12:46.193355] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:38.379 [2024-12-09 12:12:46.193360] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:38.379 [2024-12-09 12:12:46.193367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
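The request/response dump that follows is the bperf-side record of this expected failure: key1 deliberately does not match the PSK the listener was created with. Negative cases like this are driven through a NOT-style wrapper throughout the suite; a rough sketch of that pattern (not SPDK's exact autotest helper) is:

```bash
# Sketch of the negative-test wrapper pattern used around expected failures.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded: flag a test failure
    fi
    return 0        # non-zero exit was expected; let the test run continue
}

# Expected to fail with -5 (Input/output error), as in the trace around this point.
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
```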
00:37:38.379 request: 00:37:38.379 { 00:37:38.379 "name": "nvme0", 00:37:38.379 "trtype": "tcp", 00:37:38.379 "traddr": "127.0.0.1", 00:37:38.379 "adrfam": "ipv4", 00:37:38.379 "trsvcid": "4420", 00:37:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.379 "prchk_reftag": false, 00:37:38.379 "prchk_guard": false, 00:37:38.379 "hdgst": false, 00:37:38.379 "ddgst": false, 00:37:38.379 "psk": "key1", 00:37:38.379 "allow_unrecognized_csi": false, 00:37:38.379 "method": "bdev_nvme_attach_controller", 00:37:38.379 "req_id": 1 00:37:38.379 } 00:37:38.379 Got JSON-RPC error response 00:37:38.379 response: 00:37:38.379 { 00:37:38.379 "code": -5, 00:37:38.379 "message": "Input/output error" 00:37:38.379 } 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:38.379 12:12:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:38.379 12:12:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:38.379 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.638 12:12:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:38.638 12:12:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:38.638 12:12:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:38.638 12:12:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.638 12:12:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.638 12:12:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:38.638 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.898 12:12:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:38.898 12:12:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:38.898 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:38.898 12:12:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:38.898 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:39.158 12:12:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:39.158 12:12:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:39.158 12:12:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.419 12:12:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:39.419 12:12:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 [2024-12-09 12:12:47.244039] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Fr4Hk1qXb8': 0100660 00:37:39.419 [2024-12-09 12:12:47.244060] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:39.419 request: 00:37:39.419 { 00:37:39.419 "name": "key0", 00:37:39.419 "path": "/tmp/tmp.Fr4Hk1qXb8", 00:37:39.419 "method": "keyring_file_add_key", 00:37:39.419 "req_id": 1 00:37:39.419 } 00:37:39.419 Got JSON-RPC error response 00:37:39.419 response: 00:37:39.419 { 00:37:39.419 "code": -1, 00:37:39.419 "message": "Operation not permitted" 00:37:39.419 } 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:39.419 12:12:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:39.419 12:12:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.419 12:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fr4Hk1qXb8 00:37:39.680 12:12:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Fr4Hk1qXb8 00:37:39.680 12:12:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:39.680 12:12:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:39.680 12:12:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:39.680 12:12:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:39.680 12:12:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:39.680 12:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.940 12:12:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:39.940 12:12:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.940 12:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.940 [2024-12-09 12:12:47.765362] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Fr4Hk1qXb8': No such file or directory 00:37:39.940 [2024-12-09 12:12:47.765378] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:39.940 [2024-12-09 12:12:47.765391] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:39.940 [2024-12-09 12:12:47.765398] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:39.940 [2024-12-09 12:12:47.765403] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:39.940 [2024-12-09 12:12:47.765408] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:39.940 request: 00:37:39.940 { 00:37:39.940 "name": "nvme0", 00:37:39.940 "trtype": "tcp", 00:37:39.940 "traddr": "127.0.0.1", 00:37:39.940 "adrfam": "ipv4", 00:37:39.940 "trsvcid": "4420", 00:37:39.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.940 "prchk_reftag": false, 00:37:39.940 "prchk_guard": false, 00:37:39.940 "hdgst": false, 00:37:39.940 "ddgst": false, 00:37:39.940 "psk": "key0", 00:37:39.940 "allow_unrecognized_csi": false, 00:37:39.940 "method": "bdev_nvme_attach_controller", 00:37:39.940 "req_id": 1 00:37:39.940 } 00:37:39.940 Got JSON-RPC error response 00:37:39.940 response: 00:37:39.940 { 00:37:39.940 "code": -19, 00:37:39.940 "message": "No such device" 00:37:39.940 } 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:39.940 12:12:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:39.940 12:12:47 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:39.940 12:12:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:40.200 12:12:47 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Z5K8G1Xl6P 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:37:40.200 12:12:47 keyring_file -- nvmf/common.sh@729 -- # python - 00:37:40.200 12:12:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Z5K8G1Xl6P 00:37:40.200 12:12:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Z5K8G1Xl6P 00:37:40.200 12:12:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Z5K8G1Xl6P 00:37:40.200 12:12:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z5K8G1Xl6P 00:37:40.200 12:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Z5K8G1Xl6P 00:37:40.460 12:12:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.460 12:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.718 nvme0n1 00:37:40.718 12:12:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.718 12:12:48 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:40.718 12:12:48 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:40.718 12:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:40.977 12:12:48 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:40.977 12:12:48 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:40.977 12:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.977 12:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.977 12:12:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.236 12:12:48 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:41.236 12:12:48 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:41.236 12:12:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.236 12:12:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.236 12:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.236 12:12:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.236 12:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.236 12:12:49 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:41.236 12:12:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:41.236 12:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:41.497 12:12:49 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:41.497 12:12:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:41.497 12:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.757 12:12:49 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:41.757 12:12:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Z5K8G1Xl6P 00:37:41.757 12:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Z5K8G1Xl6P 00:37:41.757 12:12:49 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NPaqVI7ALG 00:37:41.757 12:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NPaqVI7ALG 00:37:42.018 12:12:49 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:42.018 12:12:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:42.277 nvme0n1 00:37:42.277 12:12:50 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:42.277 12:12:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:42.537 12:12:50 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:42.537 "subsystems": [ 00:37:42.537 { 00:37:42.537 "subsystem": "keyring", 00:37:42.537 "config": [ 00:37:42.537 { 00:37:42.537 "method": "keyring_file_add_key", 00:37:42.537 "params": { 00:37:42.537 "name": "key0", 00:37:42.537 "path": "/tmp/tmp.Z5K8G1Xl6P" 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "keyring_file_add_key", 00:37:42.537 "params": { 00:37:42.537 "name": "key1", 00:37:42.537 "path": "/tmp/tmp.NPaqVI7ALG" 00:37:42.537 } 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 
}, 00:37:42.537 { 00:37:42.537 "subsystem": "iobuf", 00:37:42.537 "config": [ 00:37:42.537 { 00:37:42.537 "method": "iobuf_set_options", 00:37:42.537 "params": { 00:37:42.537 "small_pool_count": 8192, 00:37:42.537 "large_pool_count": 1024, 00:37:42.537 "small_bufsize": 8192, 00:37:42.537 "large_bufsize": 135168, 00:37:42.537 "enable_numa": false 00:37:42.537 } 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "subsystem": "sock", 00:37:42.537 "config": [ 00:37:42.537 { 00:37:42.537 "method": "sock_set_default_impl", 00:37:42.537 "params": { 00:37:42.537 "impl_name": "posix" 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "sock_impl_set_options", 00:37:42.537 "params": { 00:37:42.537 "impl_name": "ssl", 00:37:42.537 "recv_buf_size": 4096, 00:37:42.537 "send_buf_size": 4096, 00:37:42.537 "enable_recv_pipe": true, 00:37:42.537 "enable_quickack": false, 00:37:42.537 "enable_placement_id": 0, 00:37:42.537 "enable_zerocopy_send_server": true, 00:37:42.537 "enable_zerocopy_send_client": false, 00:37:42.537 "zerocopy_threshold": 0, 00:37:42.537 "tls_version": 0, 00:37:42.537 "enable_ktls": false 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "sock_impl_set_options", 00:37:42.537 "params": { 00:37:42.537 "impl_name": "posix", 00:37:42.537 "recv_buf_size": 2097152, 00:37:42.537 "send_buf_size": 2097152, 00:37:42.537 "enable_recv_pipe": true, 00:37:42.537 "enable_quickack": false, 00:37:42.537 "enable_placement_id": 0, 00:37:42.537 "enable_zerocopy_send_server": true, 00:37:42.537 "enable_zerocopy_send_client": false, 00:37:42.537 "zerocopy_threshold": 0, 00:37:42.537 "tls_version": 0, 00:37:42.537 "enable_ktls": false 00:37:42.537 } 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "subsystem": "vmd", 00:37:42.537 "config": [] 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "subsystem": "accel", 00:37:42.537 "config": [ 00:37:42.537 { 00:37:42.537 "method": "accel_set_options", 00:37:42.537 "params": { 00:37:42.537 "small_cache_size": 128, 00:37:42.537 "large_cache_size": 16, 00:37:42.537 "task_count": 2048, 00:37:42.537 "sequence_count": 2048, 00:37:42.537 "buf_count": 2048 00:37:42.537 } 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "subsystem": "bdev", 00:37:42.537 "config": [ 00:37:42.537 { 00:37:42.537 "method": "bdev_set_options", 00:37:42.537 "params": { 00:37:42.537 "bdev_io_pool_size": 65535, 00:37:42.537 "bdev_io_cache_size": 256, 00:37:42.537 "bdev_auto_examine": true, 00:37:42.537 "iobuf_small_cache_size": 128, 00:37:42.537 "iobuf_large_cache_size": 16 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_raid_set_options", 00:37:42.537 "params": { 00:37:42.537 "process_window_size_kb": 1024, 00:37:42.537 "process_max_bandwidth_mb_sec": 0 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_iscsi_set_options", 00:37:42.537 "params": { 00:37:42.537 "timeout_sec": 30 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_nvme_set_options", 00:37:42.537 "params": { 00:37:42.537 "action_on_timeout": "none", 00:37:42.537 "timeout_us": 0, 00:37:42.537 "timeout_admin_us": 0, 00:37:42.537 "keep_alive_timeout_ms": 10000, 00:37:42.537 "arbitration_burst": 0, 00:37:42.537 "low_priority_weight": 0, 00:37:42.537 "medium_priority_weight": 0, 00:37:42.537 "high_priority_weight": 0, 00:37:42.537 "nvme_adminq_poll_period_us": 10000, 00:37:42.537 "nvme_ioq_poll_period_us": 0, 00:37:42.537 "io_queue_requests": 512, 00:37:42.537 
"delay_cmd_submit": true, 00:37:42.537 "transport_retry_count": 4, 00:37:42.537 "bdev_retry_count": 3, 00:37:42.537 "transport_ack_timeout": 0, 00:37:42.537 "ctrlr_loss_timeout_sec": 0, 00:37:42.537 "reconnect_delay_sec": 0, 00:37:42.537 "fast_io_fail_timeout_sec": 0, 00:37:42.537 "disable_auto_failback": false, 00:37:42.537 "generate_uuids": false, 00:37:42.537 "transport_tos": 0, 00:37:42.537 "nvme_error_stat": false, 00:37:42.537 "rdma_srq_size": 0, 00:37:42.537 "io_path_stat": false, 00:37:42.537 "allow_accel_sequence": false, 00:37:42.537 "rdma_max_cq_size": 0, 00:37:42.537 "rdma_cm_event_timeout_ms": 0, 00:37:42.537 "dhchap_digests": [ 00:37:42.537 "sha256", 00:37:42.537 "sha384", 00:37:42.537 "sha512" 00:37:42.537 ], 00:37:42.537 "dhchap_dhgroups": [ 00:37:42.537 "null", 00:37:42.537 "ffdhe2048", 00:37:42.537 "ffdhe3072", 00:37:42.537 "ffdhe4096", 00:37:42.537 "ffdhe6144", 00:37:42.537 "ffdhe8192" 00:37:42.537 ] 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_nvme_attach_controller", 00:37:42.537 "params": { 00:37:42.537 "name": "nvme0", 00:37:42.537 "trtype": "TCP", 00:37:42.537 "adrfam": "IPv4", 00:37:42.537 "traddr": "127.0.0.1", 00:37:42.537 "trsvcid": "4420", 00:37:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:42.537 "prchk_reftag": false, 00:37:42.537 "prchk_guard": false, 00:37:42.537 "ctrlr_loss_timeout_sec": 0, 00:37:42.537 "reconnect_delay_sec": 0, 00:37:42.537 "fast_io_fail_timeout_sec": 0, 00:37:42.537 "psk": "key0", 00:37:42.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:42.537 "hdgst": false, 00:37:42.537 "ddgst": false, 00:37:42.537 "multipath": "multipath" 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_nvme_set_hotplug", 00:37:42.537 "params": { 00:37:42.537 "period_us": 100000, 00:37:42.537 "enable": false 00:37:42.537 } 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "method": "bdev_wait_for_examine" 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 }, 00:37:42.537 { 00:37:42.537 "subsystem": "nbd", 00:37:42.537 "config": [] 00:37:42.537 } 00:37:42.537 ] 00:37:42.537 }' 00:37:42.537 12:12:50 keyring_file -- keyring/file.sh@115 -- # killprocess 376736 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 376736 ']' 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 376736 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376736 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376736' 00:37:42.537 killing process with pid 376736 00:37:42.537 12:12:50 keyring_file -- common/autotest_common.sh@973 -- # kill 376736 00:37:42.537 Received shutdown signal, test time was about 1.000000 seconds 00:37:42.537 00:37:42.537 Latency(us) 00:37:42.537 [2024-12-09T11:12:50.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.537 [2024-12-09T11:12:50.423Z] =================================================================================================================== 00:37:42.537 [2024-12-09T11:12:50.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:42.538 12:12:50 
keyring_file -- common/autotest_common.sh@978 -- # wait 376736 00:37:42.538 12:12:50 keyring_file -- keyring/file.sh@118 -- # bperfpid=378542 00:37:42.538 12:12:50 keyring_file -- keyring/file.sh@120 -- # waitforlisten 378542 /var/tmp/bperf.sock 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 378542 ']' 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:42.538 12:12:50 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:42.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:42.538 12:12:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:42.538 12:12:50 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:42.538 "subsystems": [ 00:37:42.538 { 00:37:42.538 "subsystem": "keyring", 00:37:42.538 "config": [ 00:37:42.538 { 00:37:42.538 "method": "keyring_file_add_key", 00:37:42.538 "params": { 00:37:42.538 "name": "key0", 00:37:42.538 "path": "/tmp/tmp.Z5K8G1Xl6P" 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "keyring_file_add_key", 00:37:42.538 "params": { 00:37:42.538 "name": "key1", 00:37:42.538 "path": "/tmp/tmp.NPaqVI7ALG" 00:37:42.538 } 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "subsystem": "iobuf", 00:37:42.538 "config": [ 00:37:42.538 { 00:37:42.538 "method": "iobuf_set_options", 00:37:42.538 "params": { 00:37:42.538 "small_pool_count": 8192, 00:37:42.538 "large_pool_count": 1024, 00:37:42.538 "small_bufsize": 8192, 00:37:42.538 "large_bufsize": 135168, 00:37:42.538 "enable_numa": false 00:37:42.538 } 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "subsystem": "sock", 00:37:42.538 "config": [ 00:37:42.538 { 00:37:42.538 "method": "sock_set_default_impl", 00:37:42.538 "params": { 00:37:42.538 "impl_name": "posix" 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "sock_impl_set_options", 00:37:42.538 "params": { 00:37:42.538 "impl_name": "ssl", 00:37:42.538 "recv_buf_size": 4096, 00:37:42.538 "send_buf_size": 4096, 00:37:42.538 "enable_recv_pipe": true, 00:37:42.538 "enable_quickack": false, 00:37:42.538 "enable_placement_id": 0, 00:37:42.538 "enable_zerocopy_send_server": true, 00:37:42.538 "enable_zerocopy_send_client": false, 00:37:42.538 "zerocopy_threshold": 0, 00:37:42.538 "tls_version": 0, 00:37:42.538 "enable_ktls": false 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "sock_impl_set_options", 00:37:42.538 "params": { 00:37:42.538 "impl_name": "posix", 00:37:42.538 "recv_buf_size": 2097152, 00:37:42.538 "send_buf_size": 2097152, 00:37:42.538 "enable_recv_pipe": true, 00:37:42.538 "enable_quickack": false, 00:37:42.538 "enable_placement_id": 0, 00:37:42.538 "enable_zerocopy_send_server": true, 00:37:42.538 "enable_zerocopy_send_client": false, 00:37:42.538 "zerocopy_threshold": 0, 00:37:42.538 "tls_version": 0, 00:37:42.538 "enable_ktls": false 00:37:42.538 } 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }, 00:37:42.538 { 
00:37:42.538 "subsystem": "vmd", 00:37:42.538 "config": [] 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "subsystem": "accel", 00:37:42.538 "config": [ 00:37:42.538 { 00:37:42.538 "method": "accel_set_options", 00:37:42.538 "params": { 00:37:42.538 "small_cache_size": 128, 00:37:42.538 "large_cache_size": 16, 00:37:42.538 "task_count": 2048, 00:37:42.538 "sequence_count": 2048, 00:37:42.538 "buf_count": 2048 00:37:42.538 } 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "subsystem": "bdev", 00:37:42.538 "config": [ 00:37:42.538 { 00:37:42.538 "method": "bdev_set_options", 00:37:42.538 "params": { 00:37:42.538 "bdev_io_pool_size": 65535, 00:37:42.538 "bdev_io_cache_size": 256, 00:37:42.538 "bdev_auto_examine": true, 00:37:42.538 "iobuf_small_cache_size": 128, 00:37:42.538 "iobuf_large_cache_size": 16 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_raid_set_options", 00:37:42.538 "params": { 00:37:42.538 "process_window_size_kb": 1024, 00:37:42.538 "process_max_bandwidth_mb_sec": 0 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_iscsi_set_options", 00:37:42.538 "params": { 00:37:42.538 "timeout_sec": 30 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_nvme_set_options", 00:37:42.538 "params": { 00:37:42.538 "action_on_timeout": "none", 00:37:42.538 "timeout_us": 0, 00:37:42.538 "timeout_admin_us": 0, 00:37:42.538 "keep_alive_timeout_ms": 10000, 00:37:42.538 "arbitration_burst": 0, 00:37:42.538 "low_priority_weight": 0, 00:37:42.538 "medium_priority_weight": 0, 00:37:42.538 "high_priority_weight": 0, 00:37:42.538 "nvme_adminq_poll_period_us": 10000, 00:37:42.538 "nvme_ioq_poll_period_us": 0, 00:37:42.538 "io_queue_requests": 512, 00:37:42.538 "delay_cmd_submit": true, 00:37:42.538 "transport_retry_count": 4, 00:37:42.538 "bdev_retry_count": 3, 00:37:42.538 "transport_ack_timeout": 0, 00:37:42.538 "ctrlr_loss_timeout_sec": 0, 00:37:42.538 "reconnect_delay_sec": 0, 00:37:42.538 "fast_io_fail_timeout_sec": 0, 00:37:42.538 "disable_auto_failback": false, 00:37:42.538 "generate_uuids": false, 00:37:42.538 "transport_tos": 0, 00:37:42.538 "nvme_error_stat": false, 00:37:42.538 "rdma_srq_size": 0, 00:37:42.538 "io_path_stat": false, 00:37:42.538 "allow_accel_sequence": false, 00:37:42.538 "rdma_max_cq_size": 0, 00:37:42.538 "rdma_cm_event_timeout_ms": 0, 00:37:42.538 "dhchap_digests": [ 00:37:42.538 "sha256", 00:37:42.538 "sha384", 00:37:42.538 "sha512" 00:37:42.538 ], 00:37:42.538 "dhchap_dhgroups": [ 00:37:42.538 "null", 00:37:42.538 "ffdhe2048", 00:37:42.538 "ffdhe3072", 00:37:42.538 "ffdhe4096", 00:37:42.538 "ffdhe6144", 00:37:42.538 "ffdhe8192" 00:37:42.538 ] 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_nvme_attach_controller", 00:37:42.538 "params": { 00:37:42.538 "name": "nvme0", 00:37:42.538 "trtype": "TCP", 00:37:42.538 "adrfam": "IPv4", 00:37:42.538 "traddr": "127.0.0.1", 00:37:42.538 "trsvcid": "4420", 00:37:42.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:42.538 "prchk_reftag": false, 00:37:42.538 "prchk_guard": false, 00:37:42.538 "ctrlr_loss_timeout_sec": 0, 00:37:42.538 "reconnect_delay_sec": 0, 00:37:42.538 "fast_io_fail_timeout_sec": 0, 00:37:42.538 "psk": "key0", 00:37:42.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:42.538 "hdgst": false, 00:37:42.538 "ddgst": false, 00:37:42.538 "multipath": "multipath" 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_nvme_set_hotplug", 00:37:42.538 "params": { 00:37:42.538 
"period_us": 100000, 00:37:42.538 "enable": false 00:37:42.538 } 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "method": "bdev_wait_for_examine" 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }, 00:37:42.538 { 00:37:42.538 "subsystem": "nbd", 00:37:42.538 "config": [] 00:37:42.538 } 00:37:42.538 ] 00:37:42.538 }' 00:37:42.797 [2024-12-09 12:12:50.462461] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 00:37:42.797 [2024-12-09 12:12:50.462521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378542 ] 00:37:42.797 [2024-12-09 12:12:50.546054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.797 [2024-12-09 12:12:50.574577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.056 [2024-12-09 12:12:50.718400] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.623 12:12:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.623 12:12:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:43.623 12:12:51 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:43.623 12:12:51 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.623 12:12:51 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:43.623 12:12:51 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:43.623 12:12:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.882 12:12:51 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:43.882 12:12:51 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:43.882 12:12:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.882 12:12:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:43.882 12:12:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.882 12:12:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:43.882 12:12:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:44.143 12:12:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:44.143 12:12:51 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.Z5K8G1Xl6P /tmp/tmp.NPaqVI7ALG 00:37:44.143 12:12:51 keyring_file -- keyring/file.sh@20 -- # killprocess 378542 00:37:44.143 12:12:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 378542 ']' 00:37:44.143 12:12:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 378542 00:37:44.143 12:12:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:44.143 12:12:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:44.143 12:12:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378542 00:37:44.143 12:12:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:44.143 12:12:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:44.143 12:12:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378542' 00:37:44.143 killing process with pid 378542 00:37:44.143 12:12:52 keyring_file -- common/autotest_common.sh@973 -- # kill 378542 00:37:44.143 Received shutdown signal, test time was about 1.000000 seconds 00:37:44.143 00:37:44.143 Latency(us) 00:37:44.143 [2024-12-09T11:12:52.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.143 [2024-12-09T11:12:52.029Z] =================================================================================================================== 00:37:44.143 [2024-12-09T11:12:52.029Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:44.143 12:12:52 keyring_file -- common/autotest_common.sh@978 -- # wait 378542 00:37:44.403 12:12:52 keyring_file -- keyring/file.sh@21 -- # killprocess 376690 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 376690 ']' 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 376690 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376690 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376690' 00:37:44.403 killing process with pid 376690 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@973 -- # kill 376690 00:37:44.403 12:12:52 keyring_file -- common/autotest_common.sh@978 -- # wait 376690 00:37:44.663 00:37:44.663 real 0m11.969s 00:37:44.663 user 0m28.764s 00:37:44.663 sys 0m2.758s 00:37:44.663 12:12:52 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:44.663 12:12:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:44.663 ************************************ 00:37:44.663 END TEST keyring_file 00:37:44.663 ************************************ 00:37:44.663 12:12:52 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:44.663 12:12:52 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:44.663 12:12:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:44.663 12:12:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:44.663 12:12:52 -- 
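Both bdevperf instances and the original target have now been torn down, closing out the keyring_file suite. All of its expected-failure legs (group-readable key file, deleted key file, mismatched PSK) ran under the NOT wrapper whose es=1 / (( es > 128 )) / (( !es == 0 )) bookkeeping recurs in the trace above. A simplified sketch of the idea, not the exact autotest_common.sh implementation:

    NOT() {
        # Run a command that is expected to fail and invert the outcome.
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # terminated by a signal: propagate the error
        (( es != 0 ))                    # succeed only on an ordinary nonzero exit
    }
    NOT false   # exits 0: the failure was expected
    NOT true    # exits 1: unexpected success, which fails the test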
common/autotest_common.sh@10 -- # set +x 00:37:44.663 ************************************ 00:37:44.663 START TEST keyring_linux 00:37:44.663 ************************************ 00:37:44.663 12:12:52 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:44.663 Joined session keyring: 329997955 00:37:44.663 * Looking for test storage... 00:37:44.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:44.923 12:12:52 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:44.923 12:12:52 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:37:44.923 12:12:52 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:44.923 12:12:52 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:44.923 12:12:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:44.923 12:12:52 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.924 --rc genhtml_branch_coverage=1 00:37:44.924 --rc genhtml_function_coverage=1 00:37:44.924 --rc genhtml_legend=1 00:37:44.924 --rc geninfo_all_blocks=1 00:37:44.924 --rc geninfo_unexecuted_blocks=1 00:37:44.924 00:37:44.924 ' 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.924 --rc genhtml_branch_coverage=1 00:37:44.924 --rc genhtml_function_coverage=1 00:37:44.924 --rc genhtml_legend=1 00:37:44.924 --rc geninfo_all_blocks=1 00:37:44.924 --rc geninfo_unexecuted_blocks=1 00:37:44.924 00:37:44.924 ' 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.924 --rc genhtml_branch_coverage=1 00:37:44.924 --rc genhtml_function_coverage=1 00:37:44.924 --rc genhtml_legend=1 00:37:44.924 --rc geninfo_all_blocks=1 00:37:44.924 --rc geninfo_unexecuted_blocks=1 00:37:44.924 00:37:44.924 ' 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:44.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.924 --rc genhtml_branch_coverage=1 00:37:44.924 --rc genhtml_function_coverage=1 00:37:44.924 --rc genhtml_legend=1 00:37:44.924 --rc geninfo_all_blocks=1 00:37:44.924 --rc geninfo_unexecuted_blocks=1 00:37:44.924 00:37:44.924 ' 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@23 -- # IRDMA_ENA=1 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:44.924 12:12:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:44.924 12:12:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:44.924 12:12:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.924 12:12:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.924 12:12:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.924 12:12:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.924 12:12:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.924 12:12:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:44.924 12:12:52 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@52 -- # : 0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@53 -- # export NVMF_APP_SHM_ID 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@54 -- # build_nvmf_app_args 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@26 -- # '[' 0 -eq 1 ']' 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@30 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@32 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@34 -- # '[' '' -eq 1 ']' 00:37:44.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 34: [: : integer expression expected 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@38 -- # '[' -n '' ']' 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@40 -- # '[' 0 -eq 1 ']' 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@56 -- # have_pci_nics=0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@729 -- # python - 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:44.924 /tmp/:spdk-test:key0 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:44.924 12:12:52 keyring_linux -- 
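prep_key above converts the raw hex string 00112233445566778899aabbccddeeff into a TLS PSK interchange string via format_interchange_psk and the inline python visible in the trace. The interchange format (NVMe-oF TP 8006) is NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the hash indicator (00 here, meaning the configured key is used as-is) and the base64 payload is the key bytes followed by their CRC32 in little-endian order. A self-contained sketch of that encoding; treat the helper internals as a reconstruction, not a verbatim copy of nvmf/common.sh:

    format_interchange_psk() {
        # base64(key bytes + little-endian CRC32), wrapped with the prefix
        # and hash indicator per the TP 8006 interchange format.
        local key=$1 digest=$2 b64
        b64=$(python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
        printf 'NVMeTLSkey-1:%02x:%s:\n' "$digest" "$b64"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 0
    # -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    #    (the same value the keyctl checks further down compare against)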
keyring/common.sh@15 -- # local name key digest path 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:37:44.924 12:12:52 keyring_linux -- nvmf/common.sh@729 -- # python - 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:44.924 12:12:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:44.924 /tmp/:spdk-test:key1 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=378989 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 378989 00:37:44.924 12:12:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 378989 ']' 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:44.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:44.924 12:12:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:45.183 [2024-12-09 12:12:52.838990] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:37:45.183 [2024-12-09 12:12:52.839045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378989 ] 00:37:45.183 [2024-12-09 12:12:52.921106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.183 [2024-12-09 12:12:52.951253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.752 12:12:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.752 12:12:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:45.752 12:12:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:45.752 12:12:53 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.752 12:12:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:46.012 [2024-12-09 12:12:53.638743] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.012 null0 00:37:46.012 [2024-12-09 12:12:53.670796] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:46.012 [2024-12-09 12:12:53.671130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.012 12:12:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:46.012 584827220 00:37:46.012 12:12:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:46.012 749929389 00:37:46.012 12:12:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=379317 00:37:46.012 12:12:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 379317 /var/tmp/bperf.sock 00:37:46.012 12:12:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 379317 ']' 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:46.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.012 12:12:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:46.012 [2024-12-09 12:12:53.750186] Starting SPDK v25.01-pre git sha1 427915fc6 / DPDK 24.03.0 initialization... 
00:37:46.012 [2024-12-09 12:12:53.750237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379317 ] 00:37:46.012 [2024-12-09 12:12:53.832880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.012 [2024-12-09 12:12:53.862793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.952 12:12:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:46.952 12:12:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:46.952 12:12:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:46.952 12:12:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:46.952 12:12:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:46.952 12:12:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:47.213 12:12:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:47.213 12:12:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:47.472 [2024-12-09 12:12:55.104030] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:47.472 nvme0n1 00:37:47.472 12:12:55 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:47.472 12:12:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:47.472 12:12:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:47.472 12:12:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:47.472 12:12:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:47.472 12:12:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:47.731 12:12:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:47.731 12:12:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:47.731 12:12:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@25 -- # sn=584827220 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:47.731 12:12:55 keyring_linux -- 
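check_keys has just confirmed that the bperf app reports exactly one key and that keyctl search resolves :spdk-test:key0 to serial 584827220; the keyctl print comparison against the full NVMeTLSkey-1 payload follows below. The kernel-keyring round trip being exercised looks roughly like this, using the key name and payload from the trace (the serial is whatever the kernel assigned in this run; the session keyring @s exists because the suite runs under keyctl-session-wrapper):

    # Register the PSK in the session keyring; keyctl prints the new serial.
    sn=$(keyctl add user ":spdk-test:key0" \
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl search @s user ":spdk-test:key0"   # resolves the name back to $sn
    keyctl print "$sn"                        # dumps the payload for comparison
    keyctl unlink "$sn"                       # cleanup; reports "1 links removed"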
keyring/linux.sh@26 -- # [[ 584827220 == \5\8\4\8\2\7\2\2\0 ]] 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 584827220 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:47.731 12:12:55 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:47.991 Running I/O for 1 seconds... 00:37:48.928 23918.00 IOPS, 93.43 MiB/s 00:37:48.928 Latency(us) 00:37:48.928 [2024-12-09T11:12:56.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.928 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:48.928 nvme0n1 : 1.01 23917.73 93.43 0.00 0.00 5335.26 4369.07 12014.93 00:37:48.928 [2024-12-09T11:12:56.814Z] =================================================================================================================== 00:37:48.928 [2024-12-09T11:12:56.814Z] Total : 23917.73 93.43 0.00 0.00 5335.26 4369.07 12014.93 00:37:48.928 { 00:37:48.928 "results": [ 00:37:48.928 { 00:37:48.928 "job": "nvme0n1", 00:37:48.928 "core_mask": "0x2", 00:37:48.928 "workload": "randread", 00:37:48.928 "status": "finished", 00:37:48.928 "queue_depth": 128, 00:37:48.928 "io_size": 4096, 00:37:48.928 "runtime": 1.005363, 00:37:48.928 "iops": 23917.729218202778, 00:37:48.928 "mibps": 93.4286297586046, 00:37:48.929 "io_failed": 0, 00:37:48.929 "io_timeout": 0, 00:37:48.929 "avg_latency_us": 5335.258887132995, 00:37:48.929 "min_latency_us": 4369.066666666667, 00:37:48.929 "max_latency_us": 12014.933333333332 00:37:48.929 } 00:37:48.929 ], 00:37:48.929 "core_count": 1 00:37:48.929 } 00:37:48.929 12:12:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:48.929 12:12:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:49.189 12:12:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:49.189 12:12:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:49.189 12:12:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:49.189 12:12:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:49.189 12:12:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:49.189 12:12:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.189 12:12:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:49.189 12:12:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:49.189 12:12:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:49.189 12:12:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.189 12:12:57 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:49.189 12:12:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:49.451 [2024-12-09 12:12:57.190082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:49.451 [2024-12-09 12:12:57.190819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d620 (107): Transport endpoint is not connected 00:37:49.451 [2024-12-09 12:12:57.191816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d620 (9): Bad file descriptor 00:37:49.451 [2024-12-09 12:12:57.192818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:49.451 [2024-12-09 12:12:57.192825] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:49.451 [2024-12-09 12:12:57.192831] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:49.451 [2024-12-09 12:12:57.192842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
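The passing path earlier in this test hinges on the Linux kernel keyring: linux.sh resolves the key name to a serial number with keyctl and compares the stored payload against the expected NVMe TLS PSK, while the failing attach above used key1, whose payload is not a valid PSK; the RPC request/response dump that follows shows the resulting error. A minimal standalone sketch of the keyring round trip, assuming the keyutils package is installed (the key name and PSK below are the test's sample values, not production secrets):

    # Store the PSK as a "user" key in the session keyring (@s), as the test setup does.
    PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    keyctl add user ':spdk-test:key0' "$PSK" @s

    # get_keysn: resolve the key name to its serial number.
    sn=$(keyctl search @s user ':spdk-test:key0')

    # Verify the payload round-trips intact before handing the key name to SPDK.
    [[ $(keyctl print "$sn") == "$PSK" ]] && echo 'PSK payload matches'

    # unlink_key: drop the key again once the test is done.
    keyctl unlink "$sn" @s

SPDK is then pointed at the key purely by name (--psk :spdk-test:key0); the bdev_nvme layer reads the payload out of the keyring itself.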
00:37:49.451 request:
00:37:49.451 {
00:37:49.451 "name": "nvme0",
00:37:49.451 "trtype": "tcp",
00:37:49.451 "traddr": "127.0.0.1",
00:37:49.451 "adrfam": "ipv4",
00:37:49.451 "trsvcid": "4420",
00:37:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:49.451 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:49.451 "prchk_reftag": false,
00:37:49.451 "prchk_guard": false,
00:37:49.451 "hdgst": false,
00:37:49.451 "ddgst": false,
00:37:49.451 "psk": ":spdk-test:key1",
00:37:49.451 "allow_unrecognized_csi": false,
00:37:49.451 "method": "bdev_nvme_attach_controller",
00:37:49.451 "req_id": 1
00:37:49.451 }
00:37:49.451 Got JSON-RPC error response
00:37:49.451 response:
00:37:49.451 {
00:37:49.451 "code": -5,
00:37:49.451 "message": "Input/output error"
00:37:49.451 }
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@33 -- # sn=584827220
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 584827220
00:37:49.451 1 links removed
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@33 -- # sn=749929389
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 749929389
00:37:49.451 1 links removed
00:37:49.451 12:12:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 379317
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 379317 ']'
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 379317
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 379317
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 379317'
00:37:49.451 killing process with pid 379317
12:12:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 379317
00:37:49.451 Received shutdown signal, test time was about 1.000000 seconds
00:37:49.451
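The @652-@679 trace above is the harness's NOT wrapper, which inverts a command's exit status so that an expected failure (here, attaching with the garbage PSK in key1) makes the test pass. A simplified sketch of the idea; the real common/autotest_common.sh helper also validates its argument via valid_exec_arg, as traced above:

    NOT() {
        local es=0
        "$@" || es=$?
        # Exit codes above 128 usually mean the command died from a signal;
        # treat a crash as a real failure rather than an "expected" one.
        if (( es > 128 )); then
            return "$es"
        fi
        # Succeed only when the wrapped command failed.
        (( es != 0 ))
    }

    # Example from this test: the attach with the wrong key must fail for NOT to pass.
    NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1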
00:37:49.451 Latency(us)
00:37:49.451 [2024-12-09T11:12:57.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:49.451 [2024-12-09T11:12:57.337Z] ===================================================================================================================
00:37:49.451 [2024-12-09T11:12:57.337Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:49.451 12:12:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 379317
00:37:49.712 12:12:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 378989
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 378989 ']'
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 378989
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378989
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378989'
killing process with pid 378989
12:12:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 378989
00:37:49.712 12:12:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 378989
00:37:49.971
00:37:49.971 real 0m5.200s
00:37:49.971 user 0m9.735s
00:37:49.971 sys 0m1.343s
00:37:49.971 12:12:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:49.971 12:12:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:37:49.971 ************************************
00:37:49.971 END TEST keyring_linux
00:37:49.971 ************************************
00:37:49.971 12:12:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:37:49.971 12:12:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:37:49.971 12:12:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:37:49.971 12:12:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:37:49.971 12:12:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:37:49.971 12:12:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:37:49.971 12:12:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:37:49.971 12:12:57 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:49.971 12:12:57 -- common/autotest_common.sh@10 -- # set +x
00:37:49.971 12:12:57 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:37:49.971 12:12:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:37:49.971 12:12:57 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:37:49.971 12:12:57 -- common/autotest_common.sh@10 -- # set +x
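Both bperf (pid 379317) and the earlier target process (pid 378989) are stopped through the same killprocess helper whose xtrace appears above. A condensed sketch of what that trace corresponds to, simplified from common/autotest_common.sh and assuming Linux:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0             # nothing to do if the process is gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true        # reap the child; ignore its exit status
    }

The wait at the end is why the bdevperf shutdown statistics (the all-zero latency table above) appear before the harness moves on.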
00:37:58.103 INFO: APP EXITING
00:37:58.103 INFO: killing all VMs
00:37:58.103 INFO: killing vhost app
00:37:58.103 INFO: EXIT DONE
00:38:00.645 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:00.645 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:00.645 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:00.905 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:00.905 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:01.164 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:01.164 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:01.164 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:05.370 Cleaning
00:38:05.370 Removing: /var/run/dpdk/spdk0/config
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:05.370 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:05.370 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:05.370 Removing: /var/run/dpdk/spdk1/config
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:05.370 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:05.370 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:05.370 Removing: /var/run/dpdk/spdk2/config
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:05.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:05.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:05.371 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:05.371 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:05.371 Removing: /var/run/dpdk/spdk3/config
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:05.371 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:05.371 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:05.371 Removing: /var/run/dpdk/spdk4/config
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:05.371 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:05.371 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:05.371 Removing: /dev/shm/bdev_svc_trace.1
00:38:05.371 Removing: /dev/shm/nvmf_trace.0
00:38:05.371 Removing: /dev/shm/spdk_tgt_trace.pid3998745
00:38:05.371 Removing: /var/run/dpdk/spdk0
00:38:05.371 Removing: /var/run/dpdk/spdk1
00:38:05.371 Removing: /var/run/dpdk/spdk2
00:38:05.371 Removing: /var/run/dpdk/spdk3
00:38:05.371 Removing: /var/run/dpdk/spdk4
00:38:05.371 Removing: /var/run/dpdk/spdk_pid101638
00:38:05.371 Removing: /var/run/dpdk/spdk_pid109640
00:38:05.371 Removing: /var/run/dpdk/spdk_pid110955
00:38:05.371 Removing: /var/run/dpdk/spdk_pid112772
00:38:05.371 Removing: /var/run/dpdk/spdk_pid114335
00:38:05.371 Removing: /var/run/dpdk/spdk_pid120004
00:38:05.371 Removing: /var/run/dpdk/spdk_pid125287
00:38:05.371 Removing: /var/run/dpdk/spdk_pid130169
00:38:05.371 Removing: /var/run/dpdk/spdk_pid139267
00:38:05.371 Removing: /var/run/dpdk/spdk_pid139317
00:38:05.371 Removing: /var/run/dpdk/spdk_pid144493
00:38:05.371 Removing: /var/run/dpdk/spdk_pid144655
00:38:05.371 Removing: /var/run/dpdk/spdk_pid144984
00:38:05.371 Removing: /var/run/dpdk/spdk_pid145400
00:38:05.371 Removing: /var/run/dpdk/spdk_pid145533
00:38:05.371 Removing: /var/run/dpdk/spdk_pid151032
00:38:05.371 Removing: /var/run/dpdk/spdk_pid151734
00:38:05.371 Removing: /var/run/dpdk/spdk_pid157037
00:38:05.371 Removing: /var/run/dpdk/spdk_pid160436
00:38:05.371 Removing: /var/run/dpdk/spdk_pid167337
00:38:05.371 Removing: /var/run/dpdk/spdk_pid173877
00:38:05.371 Removing: /var/run/dpdk/spdk_pid18186
00:38:05.371 Removing: /var/run/dpdk/spdk_pid184122
00:38:05.371 Removing: /var/run/dpdk/spdk_pid192800
00:38:05.371 Removing: /var/run/dpdk/spdk_pid192802
00:38:05.371 Removing: /var/run/dpdk/spdk_pid2021
00:38:05.371 Removing: /var/run/dpdk/spdk_pid216278
00:38:05.371 Removing: /var/run/dpdk/spdk_pid217160
00:38:05.371 Removing: /var/run/dpdk/spdk_pid217903
00:38:05.371 Removing: /var/run/dpdk/spdk_pid218589
00:38:05.371 Removing: /var/run/dpdk/spdk_pid219650
00:38:05.371 Removing: /var/run/dpdk/spdk_pid220338
00:38:05.371 Removing: /var/run/dpdk/spdk_pid221019
00:38:05.371 Removing: /var/run/dpdk/spdk_pid221780
00:38:05.371 Removing: /var/run/dpdk/spdk_pid227027
00:38:05.371 Removing: /var/run/dpdk/spdk_pid227298
00:38:05.371 Removing: /var/run/dpdk/spdk_pid234469
00:38:05.371 Removing: /var/run/dpdk/spdk_pid234827
00:38:05.371 Removing: /var/run/dpdk/spdk_pid241302
00:38:05.371 Removing: /var/run/dpdk/spdk_pid246343
00:38:05.371 Removing: /var/run/dpdk/spdk_pid258019
00:38:05.371 Removing: /var/run/dpdk/spdk_pid258689
00:38:05.371 Removing: /var/run/dpdk/spdk_pid263846
00:38:05.371 Removing: /var/run/dpdk/spdk_pid264195
00:38:05.371 Removing: /var/run/dpdk/spdk_pid269689
00:38:05.371 Removing: /var/run/dpdk/spdk_pid276552
00:38:05.371 Removing: /var/run/dpdk/spdk_pid279518
00:38:05.371 Removing: /var/run/dpdk/spdk_pid291673
00:38:05.371 Removing: /var/run/dpdk/spdk_pid302330
00:38:05.371 Removing: /var/run/dpdk/spdk_pid304337
00:38:05.371 Removing: /var/run/dpdk/spdk_pid305350
00:38:05.371 Removing: /var/run/dpdk/spdk_pid3059
00:38:05.371 Removing: /var/run/dpdk/spdk_pid325175
00:38:05.371 Removing: /var/run/dpdk/spdk_pid329895
00:38:05.371 Removing: /var/run/dpdk/spdk_pid333080
00:38:05.371 Removing: /var/run/dpdk/spdk_pid340508
00:38:05.371 Removing: /var/run/dpdk/spdk_pid340516
00:38:05.371 Removing: /var/run/dpdk/spdk_pid346396
00:38:05.371 Removing: /var/run/dpdk/spdk_pid348757
00:38:05.371 Removing: /var/run/dpdk/spdk_pid351102
00:38:05.371 Removing: /var/run/dpdk/spdk_pid352349
00:38:05.371 Removing: /var/run/dpdk/spdk_pid354806
00:38:05.371 Removing: /var/run/dpdk/spdk_pid356136
00:38:05.371 Removing: /var/run/dpdk/spdk_pid366229
00:38:05.371 Removing: /var/run/dpdk/spdk_pid366839
00:38:05.371 Removing: /var/run/dpdk/spdk_pid367421
00:38:05.371 Removing: /var/run/dpdk/spdk_pid370828
00:38:05.371 Removing: /var/run/dpdk/spdk_pid371496
00:38:05.371 Removing: /var/run/dpdk/spdk_pid372025
00:38:05.371 Removing: /var/run/dpdk/spdk_pid376690
00:38:05.371 Removing: /var/run/dpdk/spdk_pid376736
00:38:05.371 Removing: /var/run/dpdk/spdk_pid378542
00:38:05.371 Removing: /var/run/dpdk/spdk_pid378989
00:38:05.371 Removing: /var/run/dpdk/spdk_pid379317
00:38:05.371 Removing: /var/run/dpdk/spdk_pid3997257
00:38:05.371 Removing: /var/run/dpdk/spdk_pid3998745
00:38:05.371 Removing: /var/run/dpdk/spdk_pid3999593
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4000741
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4001047
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4002587
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4002610
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4003078
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4004112
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4004681
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4005073
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4005471
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4005882
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4006286
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4006618
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4006760
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4007082
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4008132
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4011398
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4011769
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4012132
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4012432
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4012835
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4012854
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4013364
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4013560
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4013921
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4013959
00:38:05.371 Removing: /var/run/dpdk/spdk_pid4014301
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4014388
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4015078
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4015233
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4015525
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4020350
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4025425
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4037762
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4038473
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4043606
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4044075
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4049287
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4056941
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4060037
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4072424
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4083278
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4085303
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4086373
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4107775
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4112634
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4168823
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4175220
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4182405
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4188
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4190305
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4190307
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4191313
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4192310
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4193318
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4193987
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4193998
00:38:05.632 Removing: /var/run/dpdk/spdk_pid453
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4774
00:38:05.632 Removing: /var/run/dpdk/spdk_pid4910
00:38:05.632 Removing: /var/run/dpdk/spdk_pid500
00:38:05.632 Removing: /var/run/dpdk/spdk_pid5127
00:38:05.632 Removing: /var/run/dpdk/spdk_pid51747
00:38:05.632 Removing: /var/run/dpdk/spdk_pid543
00:38:05.632 Removing: /var/run/dpdk/spdk_pid57507
00:38:05.632 Removing: /var/run/dpdk/spdk_pid59665
00:38:05.632 Removing: /var/run/dpdk/spdk_pid61746
00:38:05.632 Removing: /var/run/dpdk/spdk_pid61785
00:38:05.632 Removing: /var/run/dpdk/spdk_pid62102
00:38:05.632 Removing: /var/run/dpdk/spdk_pid62121
00:38:05.632 Removing: /var/run/dpdk/spdk_pid62720
00:38:05.632 Removing: /var/run/dpdk/spdk_pid64837
00:38:05.632 Removing: /var/run/dpdk/spdk_pid6549
00:38:05.632 Removing: /var/run/dpdk/spdk_pid65581
00:38:05.632 Removing: /var/run/dpdk/spdk_pid65996
00:38:05.632 Removing: /var/run/dpdk/spdk_pid68665
00:38:05.893 Removing: /var/run/dpdk/spdk_pid69377
00:38:05.893 Removing: /var/run/dpdk/spdk_pid70231
00:38:05.893 Removing: /var/run/dpdk/spdk_pid75159
00:38:05.893 Removing: /var/run/dpdk/spdk_pid7946
00:38:05.893 Removing: /var/run/dpdk/spdk_pid81861
00:38:05.893 Removing: /var/run/dpdk/spdk_pid81862
00:38:05.893 Removing: /var/run/dpdk/spdk_pid81863
00:38:05.893 Removing: /var/run/dpdk/spdk_pid86562
00:38:05.893 Removing: /var/run/dpdk/spdk_pid96815
00:38:05.893 Clean
00:38:05.893 12:13:13 -- common/autotest_common.sh@1453 -- # return 0
00:38:05.893 12:13:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:05.893 12:13:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:05.893 12:13:13 -- common/autotest_common.sh@10 -- # set +x
00:38:05.893 12:13:13 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:05.893 12:13:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:05.893 12:13:13 -- common/autotest_common.sh@10 -- # set +x
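The Clean stage above amounts to plain file removal: per-process DPDK runtime state under /var/run/dpdk (one spdkN directory or spdk_pidN file per SPDK process that ever ran on this host) plus the SPDK trace files in /dev/shm. An illustrative sketch of the equivalent commands, not the exact autotest_cleanup code:

    # Drop leftover DPDK runtime directories and per-pid files from earlier SPDK runs...
    sudo rm -rf /var/run/dpdk/spdk*
    # ...and the shared-memory trace files named after the pids that created them.
    sudo rm -f /dev/shm/spdk_tgt_trace.pid* /dev/shm/nvmf_trace.* /dev/shm/bdev_svc_trace.*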
00:38:05.893 12:13:13 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:05.893 12:13:13 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:05.893 12:13:13 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:05.893 12:13:13 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:05.893 12:13:13 -- spdk/autotest.sh@398 -- # hostname
00:38:05.893 12:13:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:06.154 geninfo: WARNING: invalid characters removed from testname!
00:38:32.842 12:13:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:34.755 12:13:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:37.299 12:13:44 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:38.680 12:13:46 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:40.063 12:13:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:41.975 12:13:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
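Stripped of the repeated rc options, the coverage post-processing above is a capture, a merge, and a series of filters. A condensed sketch, assuming it runs from the workspace root and abbreviating the full /var/jenkins/... output directory as $OUT:

    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
    LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'

    # autotest.sh@398: capture post-test counters, tagged with the hostname.
    $LCOV -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    # autotest.sh@399: merge the pre-test baseline with the test capture.
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # autotest.sh@400-407: strip DPDK, system headers, and example/app code.
    # (the log additionally passes --ignore-errors unused,unused for the '/usr/*' filter)
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done

Filtering in place (-r writing back to cov_total.info) keeps only SPDK's own sources in the final report, which is what the archived artifacts contain.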
00:38:43.358 12:13:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:43.358 12:13:51 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:43.358 12:13:51 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:38:43.358 12:13:51 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:43.358 12:13:51 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:43.358 12:13:51 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:43.369 + [[ -n 3912404 ]]
00:38:43.369 + sudo kill 3912404
00:38:43.630 [Pipeline] }
00:38:43.645 [Pipeline] // stage
00:38:43.650 [Pipeline] }
00:38:43.664 [Pipeline] // timeout
00:38:43.669 [Pipeline] }
00:38:43.683 [Pipeline] // catchError
00:38:43.688 [Pipeline] }
00:38:43.701 [Pipeline] // wrap
00:38:43.706 [Pipeline] }
00:38:43.718 [Pipeline] // catchError
00:38:43.727 [Pipeline] stage
00:38:43.730 [Pipeline] { (Epilogue)
00:38:43.742 [Pipeline] catchError
00:38:43.744 [Pipeline] {
00:38:43.757 [Pipeline] echo
00:38:43.759 Cleanup processes
00:38:43.765 [Pipeline] sh
00:38:44.056 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:44.056 392272 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:44.070 [Pipeline] sh
00:38:44.359 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:44.359 ++ grep -v 'sudo pgrep'
00:38:44.359 ++ awk '{print $1}'
00:38:44.359 + sudo kill -9
00:38:44.359 + true
00:38:44.372 [Pipeline] sh
00:38:44.661 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:56.911 [Pipeline] sh
00:38:57.201 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:57.201 Artifacts sizes are good
00:38:57.218 [Pipeline] archiveArtifacts
00:38:57.225 Archiving artifacts
00:38:57.375 [Pipeline] sh
00:38:57.662 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:57.677 [Pipeline] cleanWs
00:38:57.688 [WS-CLEANUP] Deleting project workspace...
00:38:57.688 [WS-CLEANUP] Deferred wipeout is used...
00:38:57.696 [WS-CLEANUP] done
00:38:57.698 [Pipeline] }
00:38:57.714 [Pipeline] // catchError
00:38:57.725 [Pipeline] sh
00:38:58.015 + logger -p user.info -t JENKINS-CI
00:38:58.026 [Pipeline] }
00:38:58.039 [Pipeline] // stage
00:38:58.045 [Pipeline] }
00:38:58.058 [Pipeline] // node
00:38:58.063 [Pipeline] End of Pipeline
00:38:58.094 Finished: SUCCESS